diff --git "a/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2021_01.jsonl" "b/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2021_01.jsonl" new file mode 100644--- /dev/null +++ "b/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2021_01.jsonl" @@ -0,0 +1,1000 @@ +"---\nabstract: |\n In this paper, we investigate risk measures such as value at risk (VaR) and the conditional tail expectation (CTE) of the extreme (maximum and minimum) and the aggregate (total) of two dependent risks. In finance, insurance and the other fields, when people invest their money in two or more dependent or independent markets, it is very important to know the extreme and total risk before the investment. To find these risk measures for dependent cases is quite challenging, which has not been reported in the literature to the best of our knowledge. We use the FGM copula for modelling the dependence as it is relatively simple for computational purposes and has empirical successes. The marginal of the risks are considered as exponential and pareto, separately, for the case of extreme risk and as exponential for the case of the total risk. The effect of the degree of dependency on the VaR and CTE of the extreme and total risks is analyzed. We also make comparisons for the dependent and independent risks. Moreover, we propose a new risk measure called median of tail (MoT) and investigate MoT for the extreme and aggregate dependent risks.\\\n **Keywords:** Dependent risk measures;" +"---\nabstract: |\n Let $\\mathcal{C}(n,k)$ be the set of $k$-dimensional simplicial complexes $C$ over a fixed set of $n$ vertices such that:\n\n 1. $C$ has a complete $k-1$-skeleton;\n\n 2. $C$ has precisely ${{n-1}\\choose {k}}$ $k$-faces;\n\n 3. the homology group $H_{k-1}(C)$ is finite.\n\n Consider the probability measure on $\\mathcal{C}(n,k)$ where the probability of a simplicial complex $C$ is proportional to $|H_{k-1}(C)|^2$. For any fixed $k$, we determine the local weak limit of these random simplicial complexes as $n$ tends to infinity.\n\n This local weak limit turns out to be the same as the local weak limit of the $1$-out $k$-complexes investigated by Linial and Peled.\nauthor:\n- Andr\u00e1s M\u00e9sz\u00e1ros\nbibliography:\n- 'references.bib'\ntitle: 'The local weak limit of $k$-dimensional hypertrees'\n---\n\nIntroduction\n============\n\nWe consider the probability measure $\\nu_{n,k}$ on the set of $k$-dimensional hypertrees over a fixed set of $n$ vertices where the probability of a hypertree $C$ is proportional to $|H_{k-1}(C)|^2$. Let\u00a0$C_{n,k}$ be a random hypertree of law $\\nu_{n,k}$. The random bipartite graph $G_{n,k}$ is defined as the Hasse diagram of $C_{n,k}$ restricted to the faces of dimension $k$ and $k-1$.\n\nThe main result of this paper is the following.\n\n\\[thmMain\\] For a fixed $k$, the local weak" +"---\nabstract: 'We classify $n$-hereditary monomial algebras in three natural contexts: First, we give a classification of the $n$-hereditary truncated path algebras. We show that they are exactly the $n$-representation-finite Nakayama algebras classified by Vaso. Next, we classify partially the $n$-hereditary quadratic monomial algebras. In the case $n=2$, we prove that there are only two examples, provided that the preprojective algebra is a planar quiver with potential. 
The first one is a Nakayama algebra and the second one is obtained by mutating $\\mathbb A_3\\otimes_k \\mathbb A_3$, where $\\mathbb A_3$ is the Dynkin quiver of type $A$ with bipartite orientation. In the case $n\\geq 3$, we show that the only $n$-representation finite algebras are the $n$-representation-finite Nakayama algebras with quadratic relations.'\naddress:\n- 'Institutt for matematiske fag, NTNU, 7491 Trondheim, Norway'\n- 'Institutt for matematiske fag, NTNU, 7491 Trondheim, Norway'\nauthor:\n- Mads Hustad Sand\u00f8y\n- 'Louis-Philippe Thibault'\nbibliography:\n- 'bib\\_quiver\\_properties\\_hereditary.bib'\ntitle: 'Classification results for $n$-hereditary monomial algebras'\n---\n\n[^1] [^2]\n\nIntroduction\n============\n\nAuslander\u2013Reiten theory has proven to be a central tool in the study of the representation theory of Artin algebras [@ARS97]. In 2004, Iyama introduced a generalisation of some of the key concepts to a \u2018higher-dimensional\u2019 paradigm [@Iya07b; @Iya07]." +"---\nabstract: 'Numerically solving a second quantised many-body model in the permutation symmetric Fock space can be challenging for two reasons: (*i*) an increased complication in the calculations of the matrix elements of various operators, and (*ii*) a poor scaling of the cost of these calculations with the Fock space size. We present a method that solves both these problems. We find a mapping that can be used to simplify the calculations of the matrix elements. The mapping is directly generated so its computational cost scales only linearly with the space size and is negligible even for large enough sizes that approach the thermodynamic limit. A fortran implementation of the method as a library \u2013 FockMap \u2013 is provided along with a test program.'\naddress: 'Department of Physics, Quaid-i-Azam University, Islamabad 45320, Pakistan'\nauthor:\n- 'M. Ahsan Zeb'\ntitle: Efficient linear scaling mapping for permutation symmetric Fock spaces\n---\n\nOrder N, second quantised, Fock space, permutation symmetric.\n\n[**PROGRAM SUMMARY**]{}\n\n[*Program Title: FockMap*]{}\\\n[*Licensing provisions: GPLv3*]{}\\\n[*Programming language: FORTRAN*]{}\\\n[*Nature of problem: Solving second quantised many-body models in permutation symmetric Fock space*]{}\\\n[*Solution method: A mapping between the Fock states exists that can be used to calculate the matrix elements of" +"---\nabstract: 'In Euclidean $3$-space, it is well known that the Sine-Gordon equation was considered in the nineteenth century in the course of investigations of surfaces of constant Gaussian curvature $K=-1$. Such a surface can be constructed from a solution to the Sine-Gordon equation, and vice versa. With this as motivation, employing the fundamental theorem of surfaces in the Heisenberg group $H_{1}$, we show in this paper that the existence of a constant $p$-mean curvature surface (without singular points) is equivalent to the existence of a solution to a nonlinear second-order ODE , which is a kind of [**Li\u00e9nard equations**]{}. Therefore, we turn to investigate this equation. It is a surprise that we give a complete set of solutions to (or ), and hence use the types of the solution to divide constant $p$-mean curvature surfaces into several classes. As a result, after a kind of normalization, we obtain a representation of constant $p$-mean curvature surfaces and classify further all constant $p$-mean curvature surfaces. In Section \\[appcon\\], we provide an approach to construct $p$-minimal surfaces. 
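Editor's note: the FockMap record above concerns a state-to-index mapping used to assemble operator matrix elements in a Fock basis. The toy sketch below illustrates the generic idea (enumerate occupation-number states, map each to an index, use the map to fill a hopping matrix); it is *not* the linear-scaling algorithm of the FockMap library, and the sizes `N`, `M` are arbitrary.

```python
# Toy illustration: index a fixed-particle-number bosonic Fock space and
# use the state -> index map to build the matrix of b_i^dagger b_j.
from itertools import combinations
import numpy as np

def fock_states(n_particles, n_modes):
    """All occupation tuples (n_1,...,n_M) summing to n_particles."""
    states = []
    for bars in combinations(range(n_particles + n_modes - 1), n_modes - 1):
        occ, prev = [], -1
        for b in bars + (n_particles + n_modes - 1,):
            occ.append(b - prev - 1)   # stars-and-bars gap = occupation
            prev = b
        states.append(tuple(occ))
    return states

N, M = 3, 3
states = fock_states(N, M)
index = {s: k for k, s in enumerate(states)}   # the mapping

def hopping(i, j):
    """Matrix of b_i^dagger b_j in the fixed-N Fock basis."""
    H = np.zeros((len(states), len(states)))
    for s in states:
        if s[j] == 0:
            continue
        t = list(s); t[j] -= 1; t[i] += 1
        amp = np.sqrt(s[j] * (s[i] + 1)) if i != j else s[j]
        H[index[tuple(t)], index[s]] = amp
    return H

print(len(states), "basis states")
print(hopping(0, 1))
```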
It turns out that, in some sense, generic $p$-minimal surfaces can be constructed via this approach. Finally, as a derivation, we recover the Bernstein-type theorem" +"---\nabstract: 'We report on the experimental characterization of a spatially extended Josephson junction realized with a coherently-coupled two-spin-component Bose-Einstein condensate. The cloud is trapped in an elongated potential such that that transverse spin excitations are frozen. We extract the non-linear parameter with three different manipulation protocols. The outcomes are all consistent with a simple local density approximation of the spin hydrodynamics, i.e., of the so-called Bose-Josephson junction equations. We also identify a method to produce states with a well defined uniform magnetization.'\nauthor:\n- 'A. Farolfi'\n- 'A. Zenesini'\n- 'R. Cominotti'\n- 'D. Trypogeorgos'\n- 'A. Recati'\n- 'G. Lamporesi'\n- 'G. Ferrari'\ntitle: Manipulation of an elongated internal Josephson junction of bosonic atoms\n---\n\nIntroduction\n============\n\nOne of the macroscopic quantum effects observed in superconducting circuits and superfluid helium is the Josephson effect [@Josephson1962; @Sato2019], arising when two superconducting leads are coupled via tunneling effect through a thin insulating layer.\n\nAn analogous effect has also been observed in atomic Bose-Einstein condensates (BECs). In this context, the coupling has been experimentally realized mainly in two different ways. In the first case (external coupling), two BECs are spatially separated by a thin potential barrier that allows for tunneling [@Albiez2005]." +"---\nabstract: 'We develop an algebro-geometric formulation for neural networks in machine learning using the moduli space of framed quiver representations. We find natural Hermitian metrics on the universal bundles over the moduli which are compatible with the GIT quotient construction by the general linear group, and show that their Ricci curvatures give a K\u00e4hler metric on the moduli. Moreover, we use toric moment maps to construct activation functions, and prove the universal approximation theorem for the multi-variable activation function constructed from the complex projective space.'\naddress:\n- 'Department of Mathematics and Statistics, Boston University, 111 Cummington Mall, Boston MA 02215, USA'\n- 'Department of Mathematics and Statistics, Boston University, 111 Cummington Mall, Boston MA 02215, USA'\nauthor:\n- George Jeffreys\n- 'Siu-Cheong Lau'\nbibliography:\n- 'geometry.bib'\ntitle: K\u00e4hler geometry of quiver varieties and machine learning\n---\n\nIntroduction\n============\n\nMachine learning by artificial neural networks has made exciting developments and has been applied to many branches of science in recent years. Mathematically, stochastic gradient flow over a matrix space (or called the weight space) is the central tool. The non-convex nature of the cost function has made the problem very interesting. 
Current research has focused on different types of stochastic" +"[**** ]{}\\\nUpekha Delay^1,2^, Thoshara Nawarathne^1^, Sajan Dissanayake^1,3^, Samitha Gunarathne^1^, Thanushi Withanage^1^, Roshan Godaliyadda^1^, Chathura Rathnayake^2^, Parakrama Ekanayake^1^, Janaka Wijayakulasooriya^1^,\\\n**1** Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Peradeniya \\[20400\\], Sri Lanka.\\\n**2** Department of Obstetrics and Gynacology, Faculty of Medicine, University of Peradeniya, Peradeniya \\[20400\\], Sri Lanka.\\\n\nThese authors contributed equally to this work.\n\n\\* Corresponding author. upekha.delay@eng.pdn.ac.lk\n\nAbstract {#abstract .unnumbered}\n========\n\nFetal movement count monitoring is one of the most commonly used methods of assessing fetal well-being. While few methods are available to monitor fetal movements, they consist of several adverse qualities such as unreliability as well as the inability to be conducted in a non-clinical setting. Therefore, this research was conducted to design a complete system that will enable pregnant mothers to monitor fetal movement at home. This system consists of a non-invasive, non-transmitting sensor unit that can be fabricated at a low cost. An accelerometer was utilized as the primary sensor and a micro-controller based circuit was implemented. Clinical testing was conducted utilizing this sensor unit. Two phases of clinical testing procedures were done and readings from more than 120 pregnant mothers were taking. Validation was done by conducting an abdominal" +"---\nabstract: 'Together with the recent advances in semantic segmentation, many domain adaptation methods have been proposed to overcome the domain gap between training and deployment environments. However, most previous studies use limited combinations of source/target datasets, and domain adaptation techniques have never been thoroughly evaluated in a more challenging and diverse set of target domains. This work presents a new multi-domain dataset [DRIV100]{}\u00a0for benchmarking domain adaptation techniques on in-the-wild road-scene videos collected from the Internet. The dataset consists of pixel-level annotations for 100 videos selected to cover diverse scenes/domains based on two criteria; human subjective judgment and an anomaly score judged using an existing road-scene dataset. We provide multiple manually labeled ground-truth frames for each video, enabling a thorough evaluation of video-level domain adaptation where each video independently serves as the target domain. Using the dataset, we quantify domain adaptation performances of state-of-the-art methods and clarify the potential and novel challenges of domain adaptation techniques. The dataset is available at .'\nauthor:\n- |\n Haruya Sakashita[^1]\\\n Osaka University\\\n [sakashita.haruya@ist.osaka-u.ac.jp]{}\n- |\n Christoph Flothow\\\n Technische Universit\u00e4t Darmstadt\\\n [christoph.flothow@stud.tu-darmstadt.de]{}\n- |\n Noriko Takemura\\\n Osaka University\\\n [takemura@ids.osaka-u.ac.jp]{}\n- |\n Yusuke Sugano\\\n The University of Tokyo\\\n [sugano@iis.u-tokyo.ac.jp]{}\nbibliography:\n- 'egbib.bib'\ntitle: |" +"---\nabstract: 'Recurrent neural networks are machine learning algorithms which are suited well to predict time series. 
Echo state networks are one specific implementation of such neural networks that can describe the evolution of dynamical systems by supervised machine learning without solving the underlying nonlinear mathematical equations. In this work, we apply an echo state network to approximate the evolution of two-dimensional moist Rayleigh-B\u00e9nard convection and the resulting low-order turbulence statistics. We conduct long-term direct numerical simulations in order to obtain training and test data for the algorithm. Both sets are pre-processed by a Proper Orthogonal Decomposition (POD) using the snapshot method to reduce the amount of data. Training data comprise long time series of the first 150 most energetic POD coefficients. The reservoir is subsequently fed by these data and predicts of future flow states. The predictions are thoroughly validated by original simulations. Our results show good agreement of the low-order statistics. This incorporates also derived statistical moments such as the cloud cover close to the top of the convection layer and the flux of liquid water across the domain. We conclude that our model is capable of learning complex dynamics which is introduced here by the tight interaction" +"---\nabstract: |\n Vibro-tactile feedback is, by far the most common haptic interface in wearable or touchable devices. This feedback can be amplified by controlling the wave propagation characteristics in devices, by utilizing phenomena such as structural resonance. However, much of the work in vibro-tactile haptics has focused on amplifying local displacements in a structure by increasing local compliance. In this paper, we show that engineering the resonance mode shape of a structure with embedded localized mass amplifies the displacements without compromising on the stiffness or resonance frequency. The resulting structure, i.e., a *tuned mass amplifier*, produces higher tactile forces (7.7 times) compared to its counterpart without a mass, while maintaining a low frequency. We optimize the proposed design using a combination of a neural network and sensitivity analysis, and validate the results with experiments on 3-D printed structures. We also study the performance of the device on contact with a soft material, to evaluate the interaction with skin. Potential avenues for future work are also presented, including small form factor wearable haptic devices and remote haptics.\\\n \\\n Keywords: *Vibration, Haptics, Optimal design, Deep learning, 3-D printing*\nauthor:\n- 'Sai Sharan Injeti[^1]'\n- Ali Israr\n- Tianshu Liu\n- Yi\u011fit" +"Introduction\n============\n\nExchange bias effect is a phenomenon occurring at the interface between a\u00a0ferromagnet (FM) and an antiferromagnet (AFM). The exchange coupling occurs after cooling such a FM/AFM system in the external magnetic field below the N\u00e9el temperature of the AFM, giving rise to the magnetic hysteresis loop shift along the field axis.[@Kiw01JMMM; @Nog99JMMM] The shift is called the exchange bias field $H_{\\mathrm{ex}}$ and its magnitude is inversely proportional to the thickness of the FM material revealing the interfacial nature of the effect. In most cases the bias field decreases monotonically with increasing temperature to the field $H_{\\mathrm{ex}}=0$ for the blocking temperature for exchange bias. 
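Editor's note: the moist-convection record above uses an echo state network fed with POD coefficients. A minimal ESN in that spirit is sketched below: a fixed random reservoir with a ridge-regression readout. Reservoir size, spectral radius, and the toy sine input are assumed values, not those of the convection study.

```python
# Minimal echo state network: random reservoir + linear readout trained
# by ridge regression for one-step-ahead prediction of a toy series.
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in, rho, ridge = 300, 1, 0.9, 1e-6

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius

u = np.sin(0.1 * np.arange(3000))[:, None]        # toy input series
X = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in @ u[t])              # reservoir update
    X[t + 1] = x                                  # state after seeing u[t]

T = 2000                                          # training length
W_out = np.linalg.solve(X[:T].T @ X[:T] + ridge * np.eye(n_res),
                        X[:T].T @ u[:T, 0])       # readout: X[k] -> u[k]
pred = X[T:] @ W_out
print("test RMSE:", np.sqrt(np.mean((pred - u[T:, 0]) ** 2)))
```

In the study above, the scalar input would be replaced by the vector of the 150 most energetic POD coefficients; only `n_in` and the readout dimension change.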
However, there are also systems where the bias field first increases, and then drops down as temperature rises.[@Shi18JAP] Usually, the blocking temperature is lower than the N\u00e9el temperature of the AFM due to the structural imperfections present in FM and AFM materials, as well as the condition of the interface. To describe the exchange bias phenomenon various models have been applied considering the magnetic domains in the antiferromagnet,[@Mil00PRL] the role of uncompensated spins [@Tak97PRL] as well as the roughness of the FM/AFM interface.[@Mal87PRB]\n\nThe possible technological application of the exchange bias effect has been" +"---\nabstract: 'We study a vortex in a nanostripe of an antiferromagnet with easy-plane anisotropy and interfacial Dzyloshinskii-Moriya interaction. The vortex has hybrid chirality being N\u00e9el close to its center and Bloch away from it. Propagating vortices can acquire velocities up to a maximum value that is lower than the spin wave velocity. When the vortex is forced to exceed the maximum velocity, phase transitions occur to a nonflat spiral, vortex chain, and flat spiral, successively. The vortex chain is a topological configuration stabilised in the stripe geometry. Theoretical arguments lead to the general result that the velocity of localized excitations in chiral magnets cannot reach the spin wave velocity.'\nauthor:\n- Riccardo Tomasello\n- Stavros Komineas\ntitle: Vortex propagation and phase transitions in a chiral antiferromagnetic nanostripe\n---\n\nIntroduction {#sec:intro}\n============\n\nA wide range of materials present antiferromagnetic order, where neighboring magnetic moments are coupled via a strong exchange interaction and are aligned antiparallel. Antiferromagnets (AFMs) exhibit features, such as low magnetic susceptibility, robustness against external fields and lack of stray fields, that are favorable for the building blocks of spintronic devices [@Jungwirth2016; @Baltz2018]. They receive renewed interest because current techniques allow for the antiferromagnetic order to be manipulated" +"---\nabstract: 'The modeling of non-local-thermodynamic-equilibrium plasmas is crucial for many aspects of high-energy-density physics. It often requires collisional-radiative models coupled with radiative-hydrodynamics simulations. Therefore, there is a strong need for fast and as accurate as possible calculations of the cross-sections and rates of the different collisional and radiative processes. We present an analytical approach for the computation of the electron-impact excitation (EIE) cross-sections in the Plane Wave Born (PWB) approximation. The formalism relies on the screened hydrogenic model. The EIE cross-section is expressed in terms of integrals, involving spherical Bessel functions, which can be calculated analytically. In order to remedy the fact that the PWB approximation is not correct at low energy (near threshold), we consider different correcting factors (Elwert-Sommerfeld, Cowan-Robb, Kilcrease-Brookes). We also investigate the role of plasma density effects such as Coulomb screening and quantum degeneracy on the EIE rate. This requires to integrate the collision strength multiplied by the Fermi-Dirac Distribution and the Pauli blocking factor. 
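Editor's note: the last sentence above states that the degenerate-plasma EIE rate requires integrating the collision strength against the Fermi-Dirac occupation and a Pauli blocking factor. A numerical sketch of that integral follows; the constant collision strength and the values of `kT`, `mu`, `dE` are placeholders, and all physical prefactors are dropped.

```python
# Numerical sketch of the EIE rate integral with quantum degeneracy:
# collision strength x Fermi-Dirac occupation x Pauli blocking.
import numpy as np

kT, mu, dE = 100.0, 50.0, 30.0                       # eV, assumed values
f = lambda E: 1.0 / (np.exp((E - mu) / kT) + 1.0)    # Fermi-Dirac
omega = lambda E: 1.0                                # toy collision strength

E = np.linspace(dE, dE + 40 * kT, 20_000)            # incident energies
integrand = omega(E) * f(E) * (1.0 - f(E - dE))      # blocked final state
rate = np.sum(integrand) * (E[1] - E[0])             # up to prefactors
print("relative EIE rate:", rate)
```

The paper's point is that a good analytical fit to the collision strength removes the need for this quadrature; the sketch shows only what the fit replaces.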
We show that, using an analytical fit often used in collisional-radiative models, the EIE rate can be calculated accurately without any numerical integration, and compare our expression with a correction factor presented in a recent work.'\n---\n\nSimple electron-impact excitation" +"---\nabstract: 'Recently, a new assumption was proposed in \\[Phys. Rev. D 100, no.10, 104022 (2019)\\]. This assumption considers that the energy of the particle changes the enthalpy of the black hole after throwing the particle into the black hole. Using the energy-momentum relation, the results show that the second law of thermodynamics of the black hole is valid in extended phase space. In this paper, we discuss the validity of the laws of thermodynamics and the stability of the horizon of the charged AdS black hole by scalar field scattering under two assumptions, i.e., the energy flux of the scalar field $dE$ changes the internal energy of the black hole $dU$ and the energy flux of the scalar field $dE$ changes the enthalpy of the black hole $dM$.'\nauthor:\n- 'Benrong Mu$^{a,b}$'\n- 'Jing Liang$^{a,b}$'\n- 'Xiaobo Guo$^{c}$'\ntitle: Thermodynamics with pressure and volume of black holes based on two assumptions under scalar field scattering\n---\n\nIntroduction\n============\n\nSince Bardeen et al. discovered that the black hole parameters satisfy equations similar to the laws of thermodynamics [@intro-Bardeen:1973gs], black hole mechanics was gradually replaced by black hole thermodynamics. Black hole thermodynamics has gradually attracted attention.\n\nFor a RN-AdS black hole," +"---\nabstract: 'The key to realizing fault-tolerant quantum computation for singlet-triplet (ST) qubits in semiconductor double quantum dot (DQD) is to operate both the single- and two-qubit gates with high fidelity. The feasible way includes operating the qubit near the transverse sweet spot (TSS) to reduce the leading order of the noise, as well as adopting the proper pulse sequences which are immune to noise. The single-qubit gates can be achieved by introducing an AC drive on the detuning near the TSS. The large dipole moment of the DQDs at the TSS has enabled strong coupling between the qubits and the cavity resonator, which leads to a two-qubit entangling gates. When operating in the proper region and applying modest pulse sequences, both single- and two-qubit gates are having fidelity higher than 99%. Our results suggest that taking advantage of the appropriate pulse sequences near the TSS can be effective to obtain high-fidelity ST qubits.'\nauthor:\n- 'Wen-Xin Xie'\n- Chengxian Zhang\n- 'Zheng-Yuan Xue'\nbibliography:\n- 'refs\\_GQG.bib'\ntitle: 'Universal singlet-triplet qubits implemented near the transverse sweet spot'\n---\n\nIntroduction\n============\n\nQuantum computing using the spin states of the electrons confined in the semiconductor quantum dots [@Loss.98; @Petta.05; @Hanson.07; @Zimmerman.14; @Bermeister.14;" +"---\nabstract: 'Pronunciation modeling is a key task for building speech technology in new languages, and while solid grapheme-to-phoneme (G2P) mapping systems exist, language coverage can stand to be improved. The information needed to build G2P models for many more languages can easily be found on Wikipedia, but unfortunately, it is stored in disparate formats. We report on a system we built to mine a pronunciation data set in 819 languages from loosely structured tables within Wikipedia. 
The data includes phoneme inventories, and for 63 low-resource languages, also includes the grapheme-to-phoneme (G2P) mapping. 54 of these languages do not have easily findable G2P mappings online otherwise. We turned the information from Wikipedia into a structured, machine-readable TSV format, and make the resulting data set publicly available so it can be improved further and used in a variety of applications involving low-resource languages.'\nauthor:\n- |\n Tania Chakraborty, Manasa Prasad, Theresa Breiner, Sandy Ritchie, Daan van Esch\\\n Google Research\\\n [{taniarini, pbmanasa, tbreiner, sandyritchie, dvanesch}@google.com]{}\ndate: June 2020\ntitle: 'Mining Large-Scale Low-Resource Pronunciation Data From Wikipedia'\n---\n\nIntroduction {#intro}\n============\n\nThere are thousands of languages spoken around the world, and many efforts to learn about them and document them. However, information about" +"---\nabstract: 'Recent studies in the field of Machine Translation (MT) and Natural Language Processing (NLP) have shown that existing models amplify biases observed in the training data. The amplification of biases in language technology has mainly been examined with respect to specific phenomena, such as gender bias. In this work, we go beyond the study of gender in MT and investigate how bias amplification might affect language in a broader sense. We hypothesize that the \u2018algorithmic bias\u2019, i.e. an exacerbation of frequently observed patterns in combination with a loss of less frequent ones, not only exacerbates societal biases present in current datasets but could also lead to an artificially impoverished language: \u2018machine translationese\u2019. We assess the linguistic richness (on a lexical and morphological level) of translations created by different data-driven MT paradigms \u2013 phrase-based statistical (PB-SMT) and neural MT (NMT). Our experiments show that there is a loss of lexical and morphological richness in the translations produced by all investigated MT paradigms for two language pairs (EN$\\leftrightarrow$FR and EN$\\leftrightarrow$ES).'\nauthor:\n- |\n Eva Vanmassenhove$^\\alpha$\\\n Dimitar Shterionov$^\\alpha$\\\n $^\\alpha$ Cognitive Science and AI, Tilburg University, The Netherlands\\\n [ `{e.o.j.vanmassenhove, d.shterionov}``@tilburguniversity.edu`]{}\\\n $^\\beta$ University of Maryland, College Park\\\n [ `mgwillia@umd.edu`]{} Matthew Gwilliam$^\\beta$\nbibliography:" +"---\nabstract: 'Negative Biased Temperature Instability (NBTI)-induced aging is one of the critical reliability threats in nano-scale devices. This paper makes the first attempt to study the NBTI aging in the on-chip weight memories of deep neural network (DNN) hardware accelerators, subjected to complex DNN workloads. We propose DNN-Life, a specialized aging analysis and mitigation framework for DNNs, which jointly exploits hardware- and software-level knowledge to improve the lifetime of a DNN weight memory with reduced energy overhead. At the software-level, we analyze the effects of different DNN quantization methods on the distribution of the bits of weight values. Based on the insights gained from this analysis, we propose a micro-architecture that employs low-cost memory-write (and read) transducers to achieve an optimal duty-cycle at run time in the weight memory cells, thereby balancing their aging. 
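Editor's note: the machine-translationese record above compares the lexical richness of source texts and their machine translations. The sketch below computes the simplest such measure, the type/token ratio; the two toy strings stand in for real corpus files, and the paper's analysis would use length-robust measures as well, since TTR is sensitive to text length.

```python
# Lexical-richness comparison sketch: type/token ratio of a human text
# versus a (more repetitive) machine translation.
def lexical_stats(text):
    tokens = text.lower().split()
    types = set(tokens)
    return len(tokens), len(types), len(types) / len(tokens)

human = "the quick brown fox jumps over the lazy dog near the riverbank"
mt = "the quick brown fox jumps over the lazy dog near the lazy dog"

for name, txt in [("human", human), ("MT", mt)]:
    n_tok, n_typ, ttr = lexical_stats(txt)
    print(f"{name}: tokens={n_tok} types={n_typ} TTR={ttr:.2f}")
```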
As a result, our DNN-Life framework enables efficient aging mitigation of weight memory of the given DNN hardware at minimal energy overhead during the inference process.'\nauthor:\n- \nbibliography:\n- 'biblio.bib'\ntitle: 'DNN-Life: An Energy-Efficient Aging Mitigation Framework for Improving the Lifetime of On-Chip Weight Memories in Deep Neural Network Hardware Architectures '\n---\n\nIntroduction {#Sec1:Introduction}\n============\n\nDNN accelerators have already become an essential part" +"[**Energy and mass dependencies for the characteristics of $p_T$ regions observed at LHC energies.** ]{}\n\nMais Suleymanov\n\n[****]{} Baku State University\\\nZ. Khalilov 23, Baku Azerbijan\\\n\\* mais.suleymanov@bsu.edu.az\n\nAbstract {#abstract .unnumbered}\n========\n\n[**The $p_T$ distributions of the $K^0$- and $\\phi$ - mesons produced in the $pp$ collisions at $\\sqrt{s}=2.76$$TeV$ have been analyzed by fitting them using the exponential function. It was observed that the distributions contain several $p_T$ regions similar to the cases with the charged particles, $\\pi^0$- and $\\eta$- mesons produced in the same events. These regions could be characterized using three variables: the length of the region $L^{c}_K $ and free fitting parameters $a^{c}_K $ and $b^{c}_K $. It was observed that the values of the parameters as a function of energy grouped around certain lines and there are jump-like changes. These observations together with the effect of existing the several $p_T$ regions can say on discrete energy dependencies for the $L^{c}_K $ , $a^{c}_K $ and $b^{c}_K $. The lengths of the regions increase with the mass of the particles. This increase gets stronger with energy. The mass dependencies of the parameters $a^{c}_K $ and $b^{c}_K $ show a regime change at a mass $\\simeq 500 MeV/c^2$." +"---\nabstract: 'Recent advances in artificial intelligence make it progressively hard to distinguish between genuine and counterfeit media, especially images and videos. One recent development is the rise of deepfake videos, based on manipulating videos using advanced machine learning techniques. This involves replacing the face of an individual from a source video with the face of a second person, in the destination video. This idea is becoming progressively refined as deepfakes are getting progressively seamless and simpler to compute. Combined with the outreach and speed of social media, deepfakes could easily fool individuals when depicting someone saying things that never happened and thus could persuade people in believing fictional scenarios, creating distress, and spreading fake news. In this paper, we examine a technique for possible identification of deepfake videos. We use Euler video magnification which applies spatial decomposition and temporal filtering on video data to highlight and magnify hidden features like skin pulsation and subtle motions. Our approach uses features extracted from the Euler technique to train three models to classify counterfeit and unaltered videos and compare the results with existing techniques.'\nauthor:\n- |\n Rashmiranjan Das\u00a0$^1$, Gaurav Negi\u00a0$^1$ and Alan F. Smeaton\u00a0$^{1,2}$\\\n $^1$School of Computing and" +"---\nabstract: 'The spectral energy distributions (SEDs) of some blazars exhibit an ultraviolet (UV) and/or soft X-ray excess, which can be modelled with different radiation mechanisms. 
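Editor's note: the deepfake record above relies on Euler video magnification. Its core temporal step is sketched below: band-pass filter a pixel (or pyramid-level) time series and add the amplified component back. The synthetic trace, band edges, and gain `alpha` are placeholders; a real pipeline applies this per spatial-pyramid level.

```python
# Core temporal-filtering step of Euler video magnification: isolate a
# subtle periodic component of a pixel trace and amplify it.
import numpy as np
from scipy.signal import butter, filtfilt

fs, alpha = 30.0, 20.0                      # frame rate (Hz), magnification
lo, hi = 0.8, 2.0                           # pass band ~ heart-rate range
b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")

t = np.arange(300) / fs
pixel = 100 + 0.2 * np.sin(2 * np.pi * 1.2 * t) \
        + 0.05 * np.random.randn(300)       # synthetic pixel intensity

subtle = filtfilt(b, a, pixel)              # hidden pulsation component
magnified = pixel + alpha * subtle          # amplified output trace
print("peak-to-peak before/after:", np.ptp(pixel), np.ptp(magnified))
```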
Polarization measurements of the UV/X-ray emission from blazars may provide new and unique information about the astrophysical environment of blazar jets, and could thus help to distinguish between different emission scenarios. In this paper, a new Monte-Carlo code \u2013 MAPPIES (Monte-Carlo Applications for Partially Polarized Inverse External-Compton Scattering) \u2013 for polarization-dependent Compton scattering is used to simulate the polarization signatures in a model where the UV/soft X-ray excess arises from the bulk Compton process. Predictions of the expected polarization signatures of Compton emission from the soft X-ray excess in the SED of AO 0235+164, and the UV excess in the SED of 3C 279 are made for upcoming and proposed polarimetry missions.'\nauthor:\n- 'Lent[\u00e9]{} Dreyer'\n- 'Markus B[\u00f6]{}ttcher'\nbibliography:\n- 'bib.bib'\ntitle: 'Monte-Carlo Applications for Partially Polarized Inverse External-Compton Scattering (MAPPIES) II - Application to the UV/Soft X-ray Excess in Blazar Spectra'\n---\n\nIntroduction {#sec:INTRO}\n============\n\nActive galactic nuclei (AGNs) are some of the most luminous objects in the universe. About 10% of AGNs are observed to host relativistic jets, which are considered" +"---\nabstract: |\n Crimean-Congo haemorrhagic fever (CCHF) is a tick-borne zoonotic disease caused by the Crimean-Congo hemorrhagic fever virus (CCHFV). Ticks belonging to the genus *Hyalomma* are the main vectors and reservoir for the virus. It is maintained in nature in an endemic vertebrate-tick-vertebrate cycle. CCHFV is prevalent in wide geographical areas including Asia, Africa, South-Eastern Europe and the Middle East. Over the last decade, several outbreaks of CCHFV have been observed in Europe, mainly in Mediterranean countries. Due to the high case/fatality ratio of CCHFV in human sometimes, it is of great importance for public health. Climate change and the invasion of CCHFV vectors in Central Europe suggest that the establishment of the transmission in Central Europe may be possible in future.\n\n We developed a compartment-based nonlinear Ordinary Differential Equation (ODE) system to model the disease transmission cycle including blood sucking ticks, livestock and human. Sensitivity analysis of the basic reproduction number $R_0$ shows that decreasing in the tick survival time is an efficient method to eradicate the disease. The model supports us in understanding the influence of different model parameters on the spread of CCHFV. Tick to tick transmission through co-feeding and the CCHFV circulation through trasstadial and" +"---\nabstract: 'This paper focuses on social cloud formation, where agents are involved in a closeness-based conditional resource sharing and build their resource sharing network themselves. The objectives of this paper are: (1) to investigate the impact of agents\u2019 decisions of link addition and deletion on their local and global resource availability, (2) to analyze spillover effects in terms of the impact of link addition between a pair of agents on others\u2019 utility, (3) to study the role of agents\u2019 closeness in determining what type of spillover effects these agents experience in the network, and (4) to model the choices of agents that suggest with whom they want to add links in the social cloud. The findings include the following. Firstly, agents\u2019 decision of link addition (deletion) increases (decreases) their local resource availability. 
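Editor's note: the CCHF record above describes a compartment-based ODE model of the tick-livestock-human cycle. A schematic two-host system in that spirit is sketched below; it is *not* the paper's system, and all rates are assumed values chosen only to make the cross-transmission structure concrete.

```python
# Schematic tick-livestock cross-transmission compartment model:
# susceptible/infected ticks (St, It) and livestock (Sl, Il).
import numpy as np
from scipy.integrate import odeint

beta_tl, beta_lt, mu_t, d_t = 0.3, 0.2, 0.05, 0.05   # assumed rates

def rhs(y, t):
    St, It, Sl, Il = y
    new_t = beta_lt * St * Il            # ticks infected by livestock
    new_l = beta_tl * Sl * It            # livestock infected by ticks
    return [mu_t - new_t - d_t * St,     # tick birth/death and infection
            new_t - d_t * It,
            -new_l,
            new_l]

y0 = [1.0, 1e-3, 1.0, 0.0]
t = np.linspace(0, 200, 1000)
sol = odeint(rhs, y0, t)
print("final infected ticks / livestock:", sol[-1, 1], sol[-1, 3])
```

Increasing the tick removal rate `d_t` (i.e., shortening tick survival) suppresses the outbreak in this toy system, mirroring the sensitivity conclusion quoted in the abstract.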
However, these observations do not hold in the case of global resource availability. Secondly, in a connected network, agents experience either positive or negative spillover effect and there is no case with no spillover effects. Agents observe no spillover effects if and only if the network is disconnected and consists of more than two components (sub-networks). Furthermore, if there is no change in the closeness of an" +"---\nabstract: 'The current work is motivated by the need for robust statistical methods for precision medicine; we pioneer the concept of a sequential, adaptive design for a single individual. As such, we address the need for statistical methods that provide actionable inference for a single unit at any point in time. Consider the case that one observes a single time-series, where at each time $t$, one observes a data record $O(t)$ involving treatment nodes $A(t)$, an outcome node $Y(t)$, and time-varying covariates $W(t)$. We aim to learn an optimal, unknown choice of the controlled components of the design in order to optimize the expected outcome; with that, we adapt the randomization mechanism for future time-point experiments based on the data collected on the individual over time. Our results demonstrate that one can learn the optimal rule based on a single sample, and thereby adjust the design at any point $t$ with valid inference for the mean target parameter. This work provides several contributions to the field of statistical precision medicine. First, we define a general class of averages of conditional causal parameters defined by the current context (\u201ccontext-specific\u201d) for the single unit time-series data. We define a nonparametric model" +"---\nabstract: 'In recent years, the implications of the generalized (GUP) and extended (EUP) uncertainty prin-ciples on Maxwell-Boltzmann distribution have been widely investigated. However, at high energy regimes, the validity of Maxwell-Boltzmann statistics is under debate and instead, the J\u00fcttner distribution is proposed as the distribution function in relativistic limit. Motivated by these considerations, in the present work, our aim is to study the effects of GUP and EUP on a system that obeys the J\u00fcttner distribution. To achieve this goal, we address a method to get the distribution function by starting from the partition function and its relation with thermal energy which finally helps us in finding the corresponding energy density states.'\naddress: |\n $^1$ Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), University of Maragheh, Maragheh P.O. Box 55136-553, Iran\\\n $^2$ Department of Physics, Faculty of Sciences, Yasouj University, Yasouj 75918-74934 Iran\nauthor:\n- 'Hooman Moradpour$^1$[^1], Sara Aghababaei$^2$[^2], Amir Hadi Ziaie$^1$[^3]'\ntitle: A Note on Effects of Generalized and Extended Uncertainty Principles on J\u00fcttner Gas\n---\n\nIntroduction\n============\n\nA general prediction of any quantum gravity theory is the possibility of the existence of a minimal length in nature, known as the Planck length, below which no other" +"---\nabstract: |\n Accurate and robust prediction of patient\u2019s response to drug treatments is critical for developing precision medicine. However, it is often difficult to obtain a sufficient amount of coherent drug response data from patients directly for training a generalized machine learning model. 
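Editor's note: for reference alongside the GUP/EUP record above, the baseline (unmodified) relativistic distribution it deforms is the textbook Maxwell-Jüttner form over the Lorentz factor; this is standard material, not a formula quoted from the record.

```latex
% Maxwell-Juttner distribution over the Lorentz factor gamma,
% with theta = k_B T / (m c^2) and K_2 the modified Bessel function
% of the second kind:
f(\gamma)\,d\gamma \;=\;
\frac{\gamma^{2}\beta}{\theta\,K_{2}(1/\theta)}\,
\exp\!\left(-\frac{\gamma}{\theta}\right) d\gamma,
\qquad \beta=\sqrt{1-\gamma^{-2}},\quad \gamma\ge 1 .
```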
Although the utilization of rich cell line data provides an alternative solution, it is challenging to transfer the knowledge obtained from cell lines to patients due to various confounding factors. Few existing transfer learning methods can reliably disentangle common intrinsic biological signals from confounding factors in the cell line and patient data. In this paper, we develop a Coherent Deconfounding Autoencoder (CODE-AE) that can extract both common biological signals shared by incoherent samples and private representations unique to each data set, transfer knowledge learned from cell line data to tissue data, and separate confounding factors from them. Extensive studies on multiple data sets demonstrate that CODE-AE significantly improves the accuracy and robustness over state-of-the-art methods in both predicting patient drug response and de-confounding biological signals. Thus, CODE-AE provides a useful framework to take advantage of *in vitro* omics data for developing generalized patient predictive models. The source code is available at https://github.com/XieResearchGroup/CODE-AE.\\\n \\\n **Contact:** lxie@iscb.org\\\nauthor:" +"---\nabstract: 'Sleep disorders are very widespread in the world population and suffer from a generalized underdiagnosis, given the complexity of their diagnostic methods. Therefore, there is an increasing interest in developing simpler screening methods. A pulse oximeter is an ideal device for sleep disorder screenings since it is a portable, low-cost and accessible technology. This device can provide an estimation of the heart rate (HR), which can be useful to obtain information regarding the sleep stage. In this work, we developed a network architecture with the aim of classifying the sleep stage in awake or asleep using only HR signals from a pulse oximeter. The proposed architecture has two fundamental parts. The first part has the objective of obtaining a representation of the HR by using temporal convolutional networks. Then, the obtained representation is used to feed the second part, which is based on transformers, a model built solely with attention mechanisms. Transformers are able to model the sequence, learning the transition rules between sleep stages. The performance of the proposed method was evaluated on Sleep Heart Health Study dataset, composed of $5000$ heathy and pathological subjects. The dataset was split into three subsets: $2500$ for training, $1250$ for" +"---\nabstract: 'We study the energy distribution during the emergence of a quasi-equilibrium ([QE]{}) state in the course of relaxation to equipartition in slow-fast Hamiltonian systems. A bead-spring model where beads (masses) are connected by springs is considered, and it is used as a model of polymers. The [QE]{}lasts for a long time because the energy exchange between the high-frequency vibrational and other motions is prevented when springs in the molecule become stiff. We numerically calculated the time-averaged kinetic energy and found that the kinetic energy of the solvent particles was always higher than that of the bead in a molecule. This is explained by adapting the equipartition theorem in [QE]{}, and it agrees well with the numerical results. 
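Editor's note: the CODE-AE record above separates signals shared by cell lines and patients from confounders private to each. The sketch below shows only the shared/private encoder skeleton of such a de-confounding autoencoder; layer sizes, the single-layer encoders, and the plain reconstruction loss are simplifications, not the authors' architecture (see their repository for the real one).

```python
# Schematic shared/private autoencoder: one shared encoder, one private
# encoder per domain, reconstruction from the concatenated codes.
import torch
import torch.nn as nn

class SharedPrivateAE(nn.Module):
    def __init__(self, d_in=1000, d_code=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(d_in, d_code), nn.ReLU())
        self.private_cell = nn.Sequential(nn.Linear(d_in, d_code), nn.ReLU())
        self.private_pat = nn.Sequential(nn.Linear(d_in, d_code), nn.ReLU())
        self.decoder = nn.Linear(2 * d_code, d_in)

    def forward(self, x, domain):
        private = self.private_cell if domain == "cell" else self.private_pat
        z_s, z_p = self.shared(x), private(x)
        return self.decoder(torch.cat([z_s, z_p], dim=-1)), z_s

model = SharedPrivateAE()
x_cell, x_pat = torch.randn(8, 1000), torch.randn(8, 1000)
recon_c, z_c = model(x_cell, "cell")
recon_p, z_p = model(x_pat, "patient")
loss = nn.functional.mse_loss(recon_c, x_cell) \
     + nn.functional.mse_loss(recon_p, x_pat)
# A real implementation adds terms aligning the shared codes z_c and z_p
# across domains and decorrelating shared from private codes.
print("reconstruction loss:", loss.item())
```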
The energy difference can help determine how far the system is from achieving equilibrium, and it can be used as an indicator of the number of frozen or inactive degrees exist in the molecule.'\nauthor:\n- Tatsuo Yanagita\n- Tetsuro Konishi\nbibliography:\n- 'refchain.bib'\ntitle: |\n Emergence of Quasi-equilibrium State and Energy Distribution\\\n for the Beads-spring Molecule Interacting with a Solvent\n---\n\nIntroduction {#sec:intro}\n============\n\nRelaxation to equipartition in Hamiltonian dynamical systems is a long-standing problem that has been extensively studied;" +"---\nabstract: 'LISA and Taiji are expected to form a space-based gravitational-wave (GW) detection network in the future. In this work, we make a forecast for the cosmological parameter estimation with the standard siren observation from the LISA-Taiji network. We simulate the standard siren data based on a scenario with configuration angle of $40^{\\circ}$ between LISA and Taiji. Three models for the population of massive black hole binary (MBHB), i.e., pop III, Q3d, and Q3nod, are considered to predict the events of MBHB mergers. We find that, based on the LISA-Taiji network, the number of electromagnetic (EM) counterparts detected is almost doubled compared with the case of single Taiji mission. Therefore, the LISA-Taiji network\u2019s standard siren observation could provide much tighter constraints on cosmological parameters. For example, solely using the standard sirens from the LISA-Taiji network, the constraint precision of $H_0$ could reach $1.3\\%$. Moreover, combined with the CMB data, the GW-EM observation based on the LISA-Taiji network could also tightly constrain the equation of state of dark energy, e.g., the constraint precision of $w$ reaches about $4\\%$, which is comparable with the result of CMB+BAO+SN. It is concluded that the GW standard sirens from the LISA-Taiji network will become" +"---\nbibliography:\n- 'myrefs.bib'\n---\n\n[**Abstract**]{}: A basis expansion with regularization methods is much appealing to the flexible or robust nonlinear regression models for data with complex structures. When the underlying function has inhomogeneous smoothness, it is well known that conventional reguralization methods do not perform well. In this case, an adaptive procedure such as a free-knot spline or a local likelihood method is often introduced as an effective method. However, both methods need intensive computational loads. In this study, we consider a new efficient basis expansion by proposing a smoothly varying regularization method which is constructed by some special penalties. We call them adaptive-type penalties. In our modeling, adaptive-type penalties play key rolls and it has been successful in giving good estimation for inhomogeneous smoothness functions. A crucial issue in the modeling process is the choice of a suitable model among candidates. To select the suitable model, we derive an approximated generalized information criterion (GIC). The proposed method is investigated through Monte Carlo simulations and real data analysis. 
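Editor's note: the basis-expansion record above builds on penalized regression. The sketch below shows the standard baseline it starts from: a basis expansion with a single *global* second-difference penalty; the paper's contribution is to let that penalty vary smoothly, whereas here `lam` is one assumed constant and the basis/test function are toys.

```python
# Baseline penalized basis expansion: Gaussian basis + one global
# second-difference (ridge-type) roughness penalty.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
y = np.sin(8 * np.pi * x**2) + 0.1 * rng.standard_normal(x.size)
# note the inhomogeneous smoothness: wiggly on the right, flat on the left

centers = np.linspace(0, 1, 40)
B = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 0.03) ** 2)

D = np.diff(np.eye(len(centers)), n=2, axis=0)   # second differences
lam = 1e-2                                       # one global penalty weight
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef
print("residual RMS:", np.sqrt(np.mean((fit - y) ** 2)))
```

A single `lam` must compromise between the flat and wiggly regions of this target, which is exactly the failure mode the adaptive-type penalties above are designed to remove.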
Numerical results suggest that our method performs well in various situations.\n\n[**Keywords**]{}: basis expansion, curve and surface fitting, information criterion, model selection, smoothness, tuning parameter\n\nIntroduction\n============\n\nRecently, nonlinear regression models" +"---\nauthor:\n- Adam Hall\nbibliography:\n- 'EllipseCombiningWithUnknownCrossEllipseCorrelations.bib'\ntitle: Ellipse Combining with Unknown Cross Ellipse Correlations\n---\n\nWe discuss the combining of measurements where single measurement covariances are given but the joint measurement covariance is unknown. For this paper we assume the mapping of a single measurement to the solution space is the identity matrix. We examine the solution when it is assumed all measurements are uncorrelated. We then present a way to parameter joint measurement covariance based on pairwise correlation coefficients. Finally, we discuss how to use this parameterization to combine the measurements.\n\nIntroduction {#introduction .unnumbered}\n============\n\nIn this paper, we are going to examine the linear combining or fusion of location estimates. Each estimate is a vector of size $k\\times 1$ and for each estimate we are also provided a $k\\times k$ covariance matrix. We will assume that the overall joint system covariance which describes the correlation between location estimates is not provided. Specifically, we consider the generalized least squares problem of finding an estimate $\\hat{x}$ given the following model $$y=Ax+\\epsilon,$$ where $y$ is a column stacked vector of $n$, $k \\times 1$ estimates , each $k\\times1$ vector is denoted as $y_i$ $$y=\\left[\\begin{matrix}y_{1} \\\\\\\\\\ \\vdots \\\\\\\\ y_{n}\\end{matrix}\\right]$$ and" +"---\nabstract: 'This paper develops a conservation-based approach to model traffic dynamics and alleviate traffic congestion in a network of interconnected roads (NOIR). We generate a NOIR by using the Simulation of Urban Mobility (SUMO) software based on the real street map of Philadelphia Center City. The NOIR is then represented by a directed graph with nodes identifying distinct streets in the Center City area. By classifying the streets as inlets, outlets, and interior nodes, the model predictive control (MPC) method is applied to alleviate the network traffic congestion by optimizing the traffic inflow and outflow across the boundary of the NOIR with consideration of the inner traffic dynamics as a stochastic process. The proposed boundary control problem is defined as a quadratic programming problem with constraints imposing the feasibility of traffic coordination, and a cost function defined based on the traffic density across the NOIR.'\nauthor:\n- 'Xun Liu and Hossein Rastgoftar$^{2}$[^1]'\nbibliography:\n- 'reference.bib'\ntitle: '**Conservation-Based Modeling and Boundary Control of Congestion with an Application to Traffic Management in Center City Philadelphia**'\n---\n\nIntroduction\n============\n\nIn the process of urbanization and the rapid popularization of private vehicles, the problem of urban traffic congestion has become more and more" +"---\nabstract: 'Banding artifacts are artificially-introduced contours arising from the quantization of a smooth region in a video. Despite the advent of recent higher quality video systems with more efficient codecs, these artifacts remain conspicuous, especially on larger displays. 
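Editor's note: the ellipse-combining record above fuses location estimates under the model $y = Ax + \epsilon$ with the joint covariance parameterized by pairwise correlation coefficients. A minimal generalized-least-squares sketch of that fusion follows; the two estimates, their covariances, and `rho` are assumed values, and the square-root cross-block is one common way to realize the parameterization described.

```python
# GLS fusion of two k-dimensional estimates with a parameterized
# cross-covariance block C12 = rho * sqrtm(C1) @ sqrtm(C2).
import numpy as np
from scipy.linalg import sqrtm

k, rho = 2, 0.4
y1, C1 = np.array([1.0, 0.0]), np.diag([0.5, 1.0])
y2, C2 = np.array([0.6, 0.4]), np.diag([1.0, 0.5])

C12 = rho * sqrtm(C1) @ sqrtm(C2)            # assumed cross block
Sigma = np.block([[C1, C12], [C12.T, C2]])

A = np.vstack([np.eye(k), np.eye(k)])        # identity mapping per estimate
y = np.concatenate([y1, y2])
W = np.linalg.inv(Sigma)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
P = np.linalg.inv(A.T @ W @ A)               # covariance of fused estimate
print("fused estimate:", x_hat)
print("fused covariance:\n", P)
```

Setting `rho = 0` recovers the usual uncorrelated (covariance-intersection-free) combination mentioned in the introduction above.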
In this work, a comprehensive subjective study is performed to understand the dependence of the banding visibility on encoding parameters and dithering. We subsequently develop a simple and intuitive no-reference banding index called CAMBI (Contrast-aware Multiscale Banding Index) which uses insights from Contrast Sensitivity Function in the Human Visual System to predict banding visibility. CAMBI correlates well with subjective perception of banding while using only a few visually-motivated hyperparameters.'\nauthor:\n- \nbibliography:\n- 'citations.bib'\ntitle: 'CAMBI: Contrast-aware Multiscale Banding Index'\n---\n\nIntroduction {#intro}\n============\n\nBanding artifacts are staircase-like contours introduced during the quantization of spatially smooth-varying signals, and exacerbated in the encoding of the video. These artifacts are visible in large, smooth regions with small gradients, and present in scenes containing sky, ocean, dark scenes, sunrise, animations, etc. Banding detection is essentially a problem of detecting artificially introduced contrast in a video. Even with high resolution and bit-depth content being viewed on high-definition screens, banding artifacts are prominent and tackling them" +"---\nabstract: 'Place recognition is critical for both offline mapping and online localization. However, current single-sensor based place recognition still remains challenging in adverse conditions. In this paper, a heterogeneous measurements based framework is proposed for long-term place recognition, which retrieves the query radar scans from the existing lidar maps. To achieve this, a deep neural network is built with joint training in the learning stage, and then in the testing stage, shared embeddings of radar and lidar are extracted for heterogeneous place recognition. To validate the effectiveness of the proposed method, we conduct tests and generalization experiments on the multi-session public datasets compared to other competitive methods. The experimental results indicate that our model is able to perform multiple place recognitions: lidar-to-lidar, radar-to-radar and radar-to-lidar, while the learned model is trained only once. We also release the source code publicly: .'\nauthor:\n- \nbibliography:\n- 'test.bib'\ntitle: 'Radar-to-Lidar: Heterogeneous Place Recognition via Joint Learning'\n---\n\nIntroduction\n============\n\nPlace recognition is a basic technique for both field robots in the wild and automated vehicles on the road, which helps the agent to recognize revisited places when travelling. In the mapping session or Simultaneous Localization and Mapping (SLAM), place recognition is" +"---\nabstract: 'The magnetic field $h_z$ of a moving Pearl vortex in a superconducting thin-film in $(x,y)$ plane is studied with the help of time-dependent London equation. It is found that for a vortex at the origin moving in $+x$ direction, $h_z(x,y)$ is suppressed in front of the vortex, $x>0$, and enhanced behind ($x<0$). The distribution asymmetry is proportional to the velocity and to the conductivity of normal quasiparticles. The vortex self-energy and the interaction of two moving vortices are evaluated.'\nauthor:\n- 'V. G. Kogan'\n- 'N. Nakagawa'\ndate: 'published in Condens. Matter 2021, 6, 4'\ntitle: 'Moving Pearl vortices in thin-film superconductors'\n---\n\nIntroduction\n============\n\nThe time-dependent Ginzburg-Landau equations (GL) are the major tool in modeling vortex motion. 
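Editor's note: the radar-to-lidar record above retrieves query radar scans from lidar maps via shared embeddings. The retrieval step itself is simple and sketched below with random vectors standing in for the learned embeddings; the dimensions and noise level are arbitrary.

```python
# Cross-modal retrieval sketch: match a radar embedding against a
# database of lidar-map embeddings by cosine similarity.
import numpy as np

rng = np.random.default_rng(4)
lidar_db = rng.standard_normal((1000, 256))              # map embeddings
lidar_db /= np.linalg.norm(lidar_db, axis=1, keepdims=True)

true_idx = 42
radar_q = lidar_db[true_idx] + 0.2 * rng.standard_normal(256)
radar_q /= np.linalg.norm(radar_q)                        # co-located scan

scores = lidar_db @ radar_q                               # cosine scores
top1 = int(np.argmax(scores))
print("retrieved place:", top1, "| correct:", top1 == true_idx)
```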
Although this approach, strictly speaking, is applicable only for gapless systems near the critical temperature [@Kopnin-Gor'kov], it reproduces qualitatively major features of the vortex motion.\n\nA much simpler London approach had been successfully employed through the years to describe static or nearly static vortex systems. The London equations express the basic Meissner effect and can be used at any temperature for problems where vortex cores are irrelevant. The magnetic structure of moving vortices is commonly considered the same as that" +"---\nabstract: 'Monitoring of cardiovascular activity is highly desired and can enable novel applications in diagnosing potential cardiovascular diseases and maintaining an individual\u2019s well-being. Currently, such vital signs are measured using intrusive contact devices such as an electrocardiogram (ECG), chest straps, and pulse oximeters that require the patient or the health provider to manually implement. User engagement and compliance with wearables is a well-known problem that presents a significant barrier to capturing the continuous measurements needed for health monitoring. Non-contact, device-free human sensing methods can eliminate the need for specialized heart and blood pressure monitoring equipment. Non-contact methods can have additional advantages since they are scalable with any environment where video can be captured, can be used for continuous measurements, and can be used on patients with varying levels of dexterity and independence, from people with physical impairments to infants (e.g., baby camera). In this paper, we used a non-contact method that only requires face videos recorded using commercially-available webcams. These videos were exploited to predict the health attributes like pulse rate and variance in pulse rate. The proposed approach used facial recognition to detect the face in each frame of the video using facial landmarks, followed by supervised learning" +"---\nabstract: 'Astrophysical time series often contain periodic signals. The large and growing volume of time series data from photometric surveys demands computationally efficient methods for detecting and characterizing such signals. The most efficient algorithms available for this purpose are those that exploit the ${\\mathcal{O}}(N\\log N)$ scaling of the Fast Fourier Transform (FFT). However, these methods are not optimal for non-sinusoidal signal shapes. Template fits (or periodic matched filters) optimize sensitivity for *a priori* known signal shapes but at a significant computational cost. Current implementations of template periodograms scale as ${\\mathcal{O}}(N_f N_{\\rm obs})$, where $N_f$ is the number of trial frequencies and $N_{\\rm obs}$ is the number of lightcurve observations, and due to non-convexity, they do not guarantee the best fit at each trial frequency, which can lead to spurious results. In this work, we present a non-linear extension of the Lomb-Scargle periodogram to obtain a template-fitting algorithm that is both accurate (globally optimal solutions are obtained except in pathological cases) and computationally efficient (scaling as ${\\mathcal{O}}(N_f\\log N_f)$ for a given template). 
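Editor's note: the face-video record above estimates pulse rate without contact. The standard signal-processing core of such a pipeline is sketched below: band-pass the mean green-channel trace of the face region and read the dominant spectral peak. The trace here is synthetic (a 1.25 Hz pulse, i.e., 75 bpm), and the band edges are typical assumed values.

```python
# Non-contact pulse estimate sketch: band-pass the green-channel trace
# to plausible heart rates, then take the spectral peak.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                                   # camera frame rate (Hz)
t = np.arange(0, 30, 1 / fs)
green = 0.05 * np.sin(2 * np.pi * 1.25 * t) + 0.02 * np.random.randn(t.size)

b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, green - green.mean())

spec = np.abs(np.fft.rfft(filtered)) ** 2
freqs = np.fft.rfftfreq(filtered.size, 1 / fs)
peak = freqs[np.argmax(spec)]
print(f"estimated pulse: {60 * peak:.0f} beats per minute")   # ~75 bpm
```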
The non-linear optimization of the template fit at each frequency is recast as a polynomial zero-finding problem, where the coefficients of the polynomial can be computed efficiently with" +"---\nabstract: 'The combination of fast propagation speeds and highly localized nature has hindered the direct observation of the evolution of shock waves at the molecular scale. To address this limitation, an experimental system is designed by tuning a one-dimensional magnetic lattice to evolve benign wave forms into shock waves at observable spatial and temporal scales, thus serving as a \u2018magnifying glass\u2019 to illuminate shock processes. An accompanying analysis confirms that the formation of strong shocks is fully captured. The exhibited lack of a steady state induced by indefinite expansion of a disordered transition zone points to the absence of local thermodynamic equilibrium, and resurfaces lingering questions on the validity of continuum assumptions in presence of strong shocks.'\nauthor:\n- Jian Li\n- S Chockalingam\n- Tal Cohen\nbibliography:\n- 'bibliography.bib'\ntitle: 'Observation of ultra-slow shock waves in a tunable magnetic lattice'\n---\n\nThe propagation of shock waves in solids has received enormous attention in the last several decades [@mcqueen1960equation; @holian1998plasticity; @yao2019high; @simmons2020quantum]. Experiments, molecular dynamic simulations, and continuum mechanics modeling, have been performed to investigate shock waves [@catheline2003observation; @espindola2017shear; @chockalingam2020shear; @Ramesh2008] and their interactions with complex material response such as plasticity [@chen2006dynamic], damage [@fensin2014dynamic], dislocation and twinning [@higginbotham2013molecular; @wehrenberg2017situ;" +"---\nabstract: 'While rich medical datasets are hosted in [hospitals]{}distributed across the world, concerns on [patients]{}\u2019 privacy is a barrier against using such data to train deep neural networks\u00a0(DNNs) for medical diagnostics. We propose [**]{}, a system to train DNNs on distributed datasets, which employs federated learning\u00a0(FL) with differentially-private stochastic gradient descent\u00a0(DPSGD), and, in combination with secure aggregation, can establish a better trade-off between differential privacy\u00a0(DP) guarantee and DNN\u2019s accuracy than other approaches. Results on a diabetic retinopathy\u00a0(DR) task show that [**]{}provides a DP guarantee close to the centralized training counterpart, while achieving a better classification accuracy than FL with parallel DP where DPSGD is applied without coordination. Code is available at\u00a0[]{}.'\nauthor:\n- |\n Mohammad Malekzadeh, Burak Hasircioglu, Nitish Mital, Kunal Katarya,\\\n Mehmet Emre Ozfatura, Deniz G\u00fcnd\u00fcz[^1]\\\ntitle: 'Dopamine: Differentially Private Federated Learning on Medical Data'\n---\n\n[10]{}(1,1) The Second AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-21)\n\nIntroduction\n============\n\nDeep neural networks facilitate disease recognition from medical data, particularly for [patients]{}without immediate access to doctors. Medical images are processed with DNNs for faster diagnosis of skin disease\u00a0[@skincancer], lung cancer\u00a0[@lungcancer], or diabetic retinopathy\u00a0(DR)\u00a0[@10.1001/jama.2016.17216]. 
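Editor's note: the template-periodogram record above generalizes the Lomb-Scargle periodogram beyond sinusoids. For orientation, the classical sinusoidal baseline it extends is shown below using astropy's fast implementation; the irregular sampling and the 0.37 d period are toy values.

```python
# Classical Lomb-Scargle periodogram on irregularly sampled data,
# the sinusoidal baseline that the template periodogram generalizes.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 100, 400))            # irregular epochs (days)
y = 0.3 * np.sin(2 * np.pi * t / 0.37) \
    + 0.05 * rng.standard_normal(t.size)

frequency, power = LombScargle(t, y).autopower(nyquist_factor=50)
best = 1 / frequency[np.argmax(power)]
print(f"best-fit period: {best:.4f} d")
```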
However, the memorization capacity of DNNs can" +"---\nauthor:\n- 'Rahul Kothari,'\n- Roy Maartens\nbibliography:\n- 'bibliography.bib'\ntitle: Lensing contribution to the 21cm intensity bispectrum\n---\n\nIntroduction\\[intro\\]\n=====================\n\nThe cosmic microwave background (CMB) has been [an]{} invaluable [probe]{} for developing and testing cosmological models. Its main constraining power [comes]{} from the primary anisotropies that are imprinted at $z\\sim 1000$. [In addition to this,]{} it also contributes to low-redshift constraints via the lensing of the CMB temperature by large-scale structure [@Aghanim:2018oex]. The integrated 21cm emission from neutral hydrogen (HI) in the post-reionisation era produces maps that are qualitatively similar to the CMB, but with multiple maps over a range of redshifts. 21cm intensity maps are also lensed by intervening large-scale structure. For surveys that detect individual galaxies, the lensing effect on number density occurs at first order in perturbations and modifies the tree-level power spectrum. In the case of the CMB and 21cm intensity mapping, the first-order lensing effect vanishes due to conservation of surface brightness [@Hall:2012wd; @Alonso:2015uua]: the lensing effect in the CMB and 21cm intensity arises at second order. As a result, the 21cm power spectrum is only affected at 1-loop level [@Umeh:2015gza; @Jalivand:2018vfz]. By contrast, the tree-level 21cm bispectrum does carry an imprint of" +"---\nabstract: 'We develop the concept of quasi-phasematching (QPM) by implementing it in the recently proposed Josephson traveling-wave parametric amplifier (JTWPA) with three-wave mixing (3WM). The amplifier is based on a ladder transmission line consisting of flux-biased radio-frequency SQUIDs whose nonlinearity is of $\\chi^{(2)}$-type. QPM is achieved in the 3WM process, $\\omega_p=\\omega_s+\\omega_i$ (where $\\omega_p$, $\\omega_s$, and $\\omega_i$ are the pump, signal, and idler frequencies, respectively) due to designing the JTWPA to include periodically inverted groups of these SQUIDs that reverse the sign of the nonlinearity. Modeling shows that the JTWPA bandwidth is relatively large (ca. $0.4\\omega_p$) and flat, while unwanted modes, including $\\omega_{2p}=2\\omega_p$, $\\omega_+=\\omega_p +\\omega_s$, $\\omega_- = 2\\omega_p - \\omega_s$, etc., are strongly suppressed with the help of engineered dispersion.'\nauthor:\n- 'A. B. Zorin'\ndate: 'May 15, 2021'\ntitle: 'Quasi-phasematching in a poled Josephson traveling-wave parametric amplifier with three-wave mixing'\n---\n\nDue to vanishingly small losses and an ultimately quantum level of internal noise, [@Louisell1961] cryogenic traveling-microwave parametric amplifiers based on the kinetic inductance of superconducting wires [@Eom2012; @Vissers2016; @Malnou2021] and Josephson junctions [@Macklin2015; @White2015; @Planat2020; @Ranadive2021] are considered highly useful quantum devices that can be applied in precision quantum measurements, photon detection, quantum communication and quantum computing. [@Wallraff2004;" +"---\nabstract: 'The radiation mechanisms responsible for the multiwavelength emission from relativistic jet sources are poorly understood. The modelling of the spectral energy distributions (SEDs) and light curves alone is not adequate to distinguish between existing models. Polarisation in the $X$-ray and $\\gamma$-ray regime of these sources may provide new and unique information about the jet physics and radiation mechanisms. 
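Editor's note: the JTWPA record above uses quasi-phasematching via periodically inverted SQUID groups that reverse the sign of the nonlinearity. The textbook mechanism is easy to demonstrate numerically, as below: with phase mismatch `dk`, a nonlinearity whose sign flips every coherence length keeps the parametric amplitude growing, while a uniform one oscillates. Units are arbitrary and the model is the generic undepleted-pump integral, not the device equations of the paper.

```python
# Quasi-phasematching demonstration: |integral of chi(x) e^{i dk x} dx|
# for uniform versus periodically sign-flipped ("poled") nonlinearity.
import numpy as np

dk = 2.0 * np.pi                          # phase mismatch per unit length
x = np.linspace(0, 10, 20_000)
dx = x[1] - x[0]

chi_uniform = np.ones_like(x)
chi_poled = np.sign(np.cos(dk * x))       # sign flip every pi/dk

for name, chi in [("uniform", chi_uniform), ("poled", chi_poled)]:
    amp = np.abs(np.cumsum(chi * np.exp(1j * dk * x)) * dx)
    print(f"{name}: |A(L)| = {amp[-1]:.3f}")
```

The uniform case stays bounded near $2/\Delta k$, while the poled case grows roughly as $(2/\pi)L$, which is why periodic inversion of the rf-SQUID groups restores gain despite the mismatch.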
Several upcoming projects will be able to deliver polarimetric measurements of the brightest $X$-ray sources, including active galactic nuclei (AGN) jets and $\\gamma$-ray bursts (GRBs). This article describes the development of a new Monte-Carlo code \u2013 MAPPIES (Monte-Carlo Applications for Partially Polarised Inverse External-Compton Scattering) \u2013 for polarisation-dependent Compton scattering in relativistic jet sources. Generic results for Compton polarisation in the Thomson and Klein-Nishina regimes are presented.'\nauthor:\n- 'Lent[\u00e9]{} Dreyer'\n- 'Markus B[\u00f6]{}ttcher'\nbibliography:\n- 'REF.bib'\ntitle: 'Monte-Carlo Applications for Partially Polarised Inverse External-Compton Scattering (MAPPIES) - I. Description of the code and First Results'\n---\n\nIntroduction {#sec:INTRO}\n============\n\nThe radiation from jetted astrophysical sources (e.g. active galactic nuclei (AGNs) and $\\gamma$-ray bursts (GRBs)) is characterised by their spectral energy distribution (SED) which can be modelled in many different ways, whilst being consistent with the spectral shape of the" +"---\nabstract: 'Graph contrastive learning algorithms have demonstrated remarkable success in various applications such as node classification, link prediction, and graph clustering.'\nauthor:\n- |\n Kaili Ma klma@cse.cuhk.edu.hk\\\n Department of Computer Science and Engineering,\\\n The Chinese University of Hong Kong Garry Yang hcyang@cse.cuhk.edu.hk\\\n Department of Computer Science and Engineering,\\\n The Chinese University of Hong Kong Han Yang hyang@cse.cuhk.edu.hk\\\n Department of Computer Science and Engineering,\\\n The Chinese University of Hong Kong Yongqiang Chen yqchen@cse.cuhk.edu.hk\\\n Department of Computer Science and Engineering,\\\n The Chinese University of Hong Kong James Cheng jcheng@cse.cuhk.edu.hk\\\n Department of Computer Science and Engineering,\\\n The Chinese University of Hong Kong\\\nbibliography:\n- 'main.bib'\ntitle: Calibrating and Improving Graph Contrastive Learning\n---\n\nIntroduction\n============\n\nGraph structures are widely used to capture abundant information, such as hierarchical configurations and community structures, present in data from various domains like social networks, e-commerce networks, knowledge graphs, the World Wide Web, and semantic webs. By incorporating graph topology along with node and edge attributes into machine learning frameworks, graph representation learning has demonstrated remarkable success in numerous essential applications, such as node classification, link prediction, and graph clustering.\n\nAlthough graph contrastive algorithms have demonstrated strong performance in some downstream tasks, we have discovered that directly" +"---\nabstract: 'Deep neural networks (DNNs) used for brain-computer-interface (BCI) classification are commonly expected to learn general features when trained across a variety of contexts, such that these features could be fine-tuned to specific contexts. While some success is found in such an approach, we suggest that this interpretation is limited and an alternative would better leverage the newly (publicly) available massive EEG datasets. We consider how to adapt techniques and architectures used for language modelling (LM), that appear capable of ingesting awesome amounts of data, towards the development of encephalography modelling (EM) with DNNs in the same vein. 
We specifically adapt an approach effectively used for automatic speech recognition, which similarly (to LMs) uses a self-supervised training objective to learn compressed representations of raw data signals. After adaptation to EEG, we find that a single pre-trained model is capable of modelling completely novel raw EEG sequences recorded with differing hardware, and different subjects performing different tasks. Furthermore, both the internal representations of this model and the entire architecture can be fine-tuned to a *variety* of downstream BCI and EEG classification tasks, outperforming prior work in more *task-specific* (sleep stage classification) self-supervision.'\nauthor:\n- |\n Demetres Kostas\\\n University of Toronto," +"---\nauthor:\n- 'J. Fernando Barbero G.,'\n- 'Bogar D\u00edaz,'\n- 'Juan Margalef-Bentabol'\n- 'and Eduardo J. S. Villase\u00f1or'\ntitle: 'Hamiltonian Gotay-Nester-Hinds analysis of the parametrized unimodular extension of the Holst action '\n---\n\nIntroduction\n============\n\n[\\[sec\\_introduction\\]]{}\n\nAlthough less popular than the Dirac algorithm [@Dirac], the Gotay-Nester-Hinds (GNH) approach to the Hamiltonian formulation of mechanical systems and field theories defined by singular Lagrangians is very powerful and conceptually clean [@GNH1; @GNH2; @GNH3; @BPV; @margalef2018thesis]. Its geometric underpinnings provide a rigorous viewpoint that avoids many of the drawbacks of Dirac\u2019s method \u2013in particular when applied to field theories\u2013 while ultimately giving the same basic information. Several differences between both approaches should be noted:\n\n- Dirac\u2019s method relies heavily on the language of classical mechanics. For instance, singular Lagrangian systems are characterized as those for which it is impossible to write all the velocities in terms of momenta; this leads to the ensuing appearance of constraints and the need to enforce their stability in order to guarantee the consistency of time evolution. The GNH method, on the other hand, is based on geometry. For instance, the notion of dynamical stability is translated into the requirement that the vector fields that encode the" +"---\nabstract: 'Machines, from artificially intelligent digital assistants to embodied robots, are becoming more pervasive in everyday life. Drawing on feminist science and technology studies (STS) perspectives, we demonstrate how machine designers are not just crafting neutral objects, but relationships between machines and humans that are entangled in human social issues such as gender and power dynamics. Thus, in order to create a more ethical and just future, the dominant assumptions currently underpinning the design of these human-machine relations must be challenged and reoriented toward relations of justice and inclusivity. This paper contributes the \u201csocial machine\u201d as a model for technology designers who seek to recognize the importance, diversity and complexity of the social in their work, *and* to engage with the agential power of machines. In our model, the social machine is imagined as a potentially equitable relationship partner that has agency and as an \u201cother\u201d that is distinct from, yet related to, humans, objects, and animals. We critically examine and contrast our model with tendencies in robotics that consider robots as tools, human companions, animals or creatures, and/or slaves. 
In doing so, we demonstrate ingrained dominant assumptions about human-machine relations and reveal the challenges of radical thinking in" +"---\nabstract: 'Recovery of the causal structure of dynamic networks from noisy measurements has long been a problem of intense interest across many areas of science and engineering. Many algorithms have been proposed, but there is no work that compares the performance of the algorithms to converse bounds in a non-asymptotic setting. As a step to address this problem, this paper gives lower bounds on the error probability for causal network support recovery in a linear Gaussian setting. The bounds are based on the use of the Bhattacharyya coefficient for binary hypothesis testing problems with mixture probability distributions. Comparison of the bounds and the performance achieved by two representative recovery algorithms are given for sparse random networks based on the Erd\u0151s\u2013R\u00e9nyi model.'\nauthor:\n- \ntitle: |\n Lower Bounds on Information Requirements for Causal Network Inference\\\n [^1]\n---\n\nIntroduction\n============\n\nCausal networks refer to the directed graphs representing the causal relationships among a number of entities, and the inference of sparse large-scale causal networks is of great importance in many scientific, engineering, and medical fields. For example, the study of gene regulatory networks in biology concerns the causal interactions between genes and is vital for finding pathways of biological functions. Because" +"---\nabstract: 'Given an (optimal) dynamic treatment rule, it may be of interest to evaluate that rule \u2013 that is, to ask the causal question: what is the expected outcome had every subject received treatment according to that rule? In this paper, we study the performance of estimators that approximate the true value of: 1) an *a priori* known dynamic treatment rule 2) the true, unknown optimal dynamic treatment rule (ODTR); 3) an estimated ODTR, a so-called \u201cdata-adaptive parameter,\" whose true value depends on the sample. Using simulations of point-treatment data, we specifically investigate: 1) the impact of increasingly data-adaptive estimation of nuisance parameters and/or of the ODTR on performance; 2) the potential for improved efficiency and bias reduction through the use of semiparametric efficient estimators; and, 3) the importance of sample splitting based on CV-TMLE for accurate inference. In the simulations considered, there was very little cost and many benefits to using the cross-validated targeted maximum likelihood estimator (CV-TMLE) to estimate the value of the true and estimated ODTR; importantly, and in contrast to non cross-validated estimators, the performance of CV-TMLE was maintained even when highly data-adaptive algorithms were used to estimate both nuisance parameters and the ODTR. In" +"---\nabstract: |\n We introduce a homogeneous multigrid method in the sense that it uses the same embedded discontinuous Galerkin (EDG) discretization scheme for Poisson\u2019s equation on all levels. In particular, we use the injection operator developed in [@LuRK2020] for HDG and prove optimal convergence of the method under the assumption of elliptic regularity. Numerical experiments underline our analytical findings.\\\n Keywords. 
Multigrid method, embedded discontinuous Galerkin, Poisson equation.\naddress:\n- 'Department of Mathematics Sciences, Soochow University, Suzhou, 215006, China'\n- 'Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Mathematikon, Im Neuenheimer Feld 205, 69120 Heidelberg, Germany'\n- 'Interdisciplinary Center for Scientific Computing (IWR) and Mathematics Center Heidelberg (MATCH), Heidelberg University, Mathematikon, Im Neuenheimer Feld 205, 69120 Heidelberg, Germany'\nauthor:\n- Peipei Lu\n- Andreas Rupp\n- Guido Kanschat\nbibliography:\n- 'MultigridEDG.bib'\ntitle: Homogeneous multigrid for embedded discontinuous Galerkin methods\n---\n\n[^1]\n\n[^2]\n\nIntroduction\n============\n\nAs described in [@CockburnGSS2009], the embedded discontinuous Galerkin (EDG) method can be obtained from the hybridizable discontinuous Galerkin (HDG) methods by replacing the space for the hybrid unknown by an overall continuous space. Thus, the stiffness matrix is significantly smaller, and its size and sparsity structure coincide with those of the stiffness matrix of the" +"---\nabstract: |\n In this article we consider the estimation of the log-normalization constant associated to a class of continuous-time filtering models. In particular, we consider ensemble Kalman-Bucy filter based estimates based upon several nonlinear Kalman-Bucy diffusions. Based upon new conditional bias results for the mean of the afore-mentioned methods, we analyze the empirical log-scale normalization constants in terms of their $\\mathbb{L}_n-$errors and conditional bias. Depending on the type of nonlinear Kalman-Bucy diffusion, we show that these are of order $(t^{1/2}/N^{1/2}) + t/N$ or $1/N^{1/2}$ ($\\mathbb{L}_n-$errors) and of order $[t+t^{1/2}]/N$ or $1/N$ (conditional bias), where $t$ is the time horizon and $N$ is the ensemble size. Finally, we use these results for online static parameter estimation for the above filtering models and implement the methodology for both linear and nonlinear models.\\\n **Keywords**: Kalman-Bucy filter, Riccati equations, nonlinear Markov processes.\n---\n\n[**Log-Normalization Constant Estimation using the Ensemble Kalman-Bucy Filter with Application to High-Dimensional Models**]{}\n\nBY DAN CRISAN$^{1}$, PIERRE DEL MORAL$^{2}$, AJAY JASRA$^{3}$ & HAMZA RUZAYQAT$^{3}$\n\n[$^{1}$Department of Mathematics, Imperial College London, London, SW7 2AZ, UK.]{} [E-Mail:]{} `d.crisan@ic.ac.uk`\\\n[$^{2}$Center INRIA Bordeaux Sud-Ouest & Institut de Mathematiques de Bordeaux, Bordeaux, 33405, FR.]{} [E-Mail:]{} `pierre.del-moral@inria.fr`\\\n[$^{3}$Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah" +"---\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'BHLocalization.bib'\ntitle: 'The Bethe-Ansatz approach to the $\\mathcal N=4$ superconformal index at finite rank'\n---\n\nIntroduction\n============\n\nAn insightful analysis of the superconformal index of ${\\cal N}=4$ maximally supersymmetric Yang-Mills theory with gauge group SU($N$) has recently provided a microscopic foundation for the entropy of electrically charged, rotating, asymptotically AdS$_5$ black holes [@Cabo-Bizet:2018ehj; @Choi:2018hmj; @Benini:2018ywd]. The results are an important improvement on the understanding of the superconformal index previously introduced in [@Romelsberger:2005eg; @Kinney:2005ej] and provide an explicit realization of a conjecture put forward in [@Hosseini:2017mds] regarding the entropy of AdS$_5$ black holes.
These developments motivated various studies into the superconformal index of large classes of 4d ${\cal N}=1$ theories [@Honda:2019cio; @ArabiArdehali:2019tdm; @Kim:2019yrz; @Cabo-Bizet:2019osg; @Amariti:2019mgp; @Lezcano:2019pae; @Lanir:2019abx; @ArabiArdehali:2019orz].\n\nThe problem of microscopic counting of the entropy has thus descended into a technical plane. Two main technical approaches have emerged, one rooted in saddle point approximations [@Cabo-Bizet:2018ehj; @Choi:2018hmj], and one in a Bethe-Ansatz (BA) formula of the index [@Benini:2018ywd]; a systematic discussion comparing both approaches including sub-leading contributions and extending the results to include 4d ${\cal N}=1$ theories was presented in [@GonzalezLezcano:2020yeb]. Other approaches to the evaluation of the index include, for example, those" +"---\nauthor:\n- 'L. M. Flor-Torres, R. Coziol, K.-P. Schr\u00f6der, D. Jack, J. H. M. M. Schmitt, and S. Blanco-Cuaresma'\ntitle: |\n Connecting the formation of stars and planets.\\\n I \u2013 Spectroscopic characterization of host stars with TIGRE \n---\n\nIntroduction {#sec:intro}\n============\n\nSince the discovery of the first planet orbiting another star in the 1990s, the number of confirmed exoplanets has steadily increased, reaching 4133 by November of last year.[^1] The urgent tasks with which we are faced now are determining the compositions of these exoplanets and understanding how they formed. However, although that should have been straightforward [@Seager2010], the detection of new types of planets has complicated the matter, changing in a crucial way our understanding of the formation of planetary systems around stars like the Sun.\n\nThe first new type of planets to be discovered was the \u201chot Jupiters\u201d [HJs; @Mayor1995], which are gas giants like Jupiter and Saturn, but with extremely small periods, $P < 10$\u00a0days, consistent with semi-major axes smaller than $a_p = 0.05$\u00a0AU. The existence of HJs is problematic because according to the model of formation of the solar system they can only form in the protoplanetary disk (PPD) where it is cold" +"---\nbibliography:\n- 'well64.bib'\n---\n\nIntroduction\n============\n\nThe Mersenne-Twister [@matsumoto1998mersenne] is one of the most popular generators of uniform pseudo-random numbers. It is used in many numerical libraries and software.\n\nSeveral variants have been developed over the years to accommodate some limitations of the original algorithm.\n\nIt has been enhanced to work directly with 64-bit numbers in [@nishimura2000tables]. The same paper also describes a variant which increases the number of non-zero terms in the characteristic polynomial. Although rarely adopted in software, the latter has the tremendous advantage of escaping much faster from zeroland. Indeed, the original Mersenne-Twister may require more than 700000 numbers to escape from a state with all bits set to zero, except for one. Although a good initialization scheme helps to minimize the issue, the issue may still unexpectedly appear. We give an example of an innocuous initialization array, with many ones and zeros.\n\n@panneton2006improved propose a more general algorithm, based on the same underlying principles, but using more internal block matrices, while keeping a similar performance profile. This allows for maximal equidistribution properties, a feature not possible in the original design.
It also permits the use of non-Mersenne-prime periods.\n\nMore recently, the algorithm has been" +"---\nabstract: '\\[sec:abstract\\] Sequences and time-series often arise in robot tasks, e.g., in activity recognition and imitation learning. In recent years, deep neural networks (DNNs) have emerged as an effective data-driven methodology for processing sequences given sufficient training data and compute resources. However, when data is limited, simpler models such as logic/rule-based methods work surprisingly well, especially when relevant prior knowledge is applied in their construction. However, unlike DNNs, these \u201cstructured\u201d models can be difficult to extend, and do not work well with raw unstructured data. In this work, we seek to learn flexible DNNs, yet leverage prior temporal knowledge when available. Our approach is to embed symbolic knowledge expressed as linear temporal logic (LTL) and use these embeddings to guide the training of deep models. Specifically, we construct semantic-based embeddings of automata generated from LTL formula via a Graph Neural Network. Experiments show that these learnt embeddings can lead to improvements on downstream robot tasks such as sequential action recognition and imitation learning.'\nauthor:\n- |\n Yaqi Xie$^*$, Fan Zhou$^*$, and Harold Soh\\\n Dept. of Computer Science, National University of Singapore.\\\n [`{yaqixie, zhoufan ,harold}@comp.nus.edu.sg`]{} [^1]\nbibliography:\n- 'ref.bib'\ntitle: '**Embedding Symbolic Temporal Knowledge into Deep Sequential Models** '\n---" +"---\nabstract: 'WarpX is a general purpose electromagnetic particle-in-cell code that was originally designed to run on many-core CPU architectures. We describe the strategy, based on the AMReX library, followed to allow WarpX to use the GPU-accelerated nodes on OLCF\u2019s Summit supercomputer, a strategy we believe will extend to the upcoming machines Frontier and Aurora. We summarize the challenges encountered, lessons learned, and give current performance results on a series of relevant benchmark problems.'\naddress:\n- 'Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA'\n- 'SLAC National Accelerator Laboratory Menlo Park, CA 94025, USA'\n- 'Lawrence Livermore National Laboratory, Livermore, CA 94550, USA'\n- 'Deutsches Elektronen Synchrotron (DESY), Hamburg, Hamburg 22607, Germany'\n- 'LIDYL, CEA-Universit\u00e9 Paris-Saclay, CEA Saclay, 91 191 Gif-sur-Yvette, France'\nauthor:\n- 'A. Myers'\n- 'A. Almgren'\n- 'L. D. Amorim'\n- 'J. Bell'\n- 'L. Fedeli'\n- 'L. Ge'\n- 'K. Gott'\n- 'D. P. Grote'\n- 'M. Hogan'\n- 'A. Huebl'\n- 'R. Jambunathan'\n- 'R. Lehe'\n- 'C. Ng'\n- 'M. Rowan'\n- 'O. Shapoval'\n- 'M. Th\u00e9venet'\n- 'J.-L. Vay'\n- 'H. Vincenti'\n- 'E. Yang'\n- 'N. Za\u00efm'\n- 'W. Zhang'\n- 'Y. Zhao'\n- 'E. Zoni'\nbibliography:\n- 'warpx.bib'\ntitle: 'Porting WarpX" +"---\nabstract: 'We report on the first measurement of charm-strange meson $D_s^{\\pm}$ production at midrapidity in Au+Au collisions at $\\sqrt{s_{_{\\rm NN}}}$ = 200 GeV from the STAR experiment. The yield ratio between strange ($D_{s}^{\\pm}$) and non-strange ($D^{0}$) open-charm mesons is presented and compared to model calculations. A significant enhancement, relative to a PYTHIA simulation of $p$+$p$ collisions, is observed in the $D_{s}^{\\pm}/D^0$ yield ratio in Au+Au collisions over a large range of collision centralities.
Model calculations incorporating abundant strange-quark production in the quark-gluon plasma (QGP) and coalescence hadronization qualitatively reproduce the data. The transverse-momentum integrated yield ratio of $D_{s}^{\\pm}/D^0$ at midrapidity is consistent with a prediction from a statistical hadronization model with the parameters constrained by the yields of light and strange hadrons measured at the same collision energy. These results suggest that the coalescence of charm quarks with strange quarks in the QGP plays an important role in $D_{s}^{\\pm}$ meson production in heavy-ion collisions.'\nauthor:\n- 'J.\u00a0Adam'\n- 'L.\u00a0Adamczyk'\n- 'J.\u00a0R.\u00a0Adams'\n- 'J.\u00a0K.\u00a0Adkins'\n- 'G.\u00a0Agakishiev'\n- 'M.\u00a0M.\u00a0Aggarwal'\n- 'Z.\u00a0Ahammed'\n- 'I.\u00a0Alekseev'\n- 'D.\u00a0M.\u00a0Anderson'\n- 'A.\u00a0Aparin'\n- 'E.\u00a0C.\u00a0Aschenauer'\n- 'M.\u00a0U.\u00a0Ashraf'\n- 'F." +"---\nabstract: 'The Hong-Ou-Mandel (HOM) effect is analyzed for photons in a modified Mach-Zehnder setup with two particles experiencing different gravitational potentials, which are later recombined using a beam-splitter. It is found that the HOM effect depends directly on the relativistic time dilation between the arms of the setup. This temporal dilation can be used to estimate the $\\gamma$ and $\\beta$ parameters of the parameterized post-Newtonian formalism. The uncertainty in the parameters $\\gamma$ and $\\beta$ are of the order $ 10^{-8}-10^{-12}$, depending on the quantum state employed.'\nauthor:\n- |\n M. Rivera-Tapia$^{1,2^\\ast}$, Marcel I. Y\u00e1\u00f1ez-Reyes$^{3,4,\\dagger}$, A. Delgado$^{1,2}$, and G. Rubilar$^{2}$\\\n $^1$[*Instituto Milenio de Investigaci\u00f3n en \u00d3ptica, Universidad de Concepci\u00f3n, Concepci\u00f3n, Chile.*]{}\\\n $^2$[*Departamento de F\u00edsica, Facultad Ciencias F\u00edsicas y Matem\u00e1ticas, Universidad de Concepci\u00f3n, Concepci\u00f3n, Chile*]{}\\\n $^3$[*ITFA, University of Amsterdam, Science Park 904, 1018 XE, Amsterdam, The Netherlands*]{}\\\n $^4$[*Nikhef Theory Group, Science Park 105, 1098 XG Amsterdam, The Netherlands*]{}\\\n [$^\\ast$`mriverat@udec.cl`, $^{\\dagger}$`marcelyr@nikhef.nl`]{}\nbibliography:\n- 'Bibliography/biblio.bib'\ntitle: 'Outperforming classical estimation of Post-Newtonian parameters of Earth\u2019s gravitational field using quantum metrology'\n---\n\nIntroduction {#SEC1}\n============\n\nGeneral Relativity is a non-linear and metric theory of the gravitational field. The non-linear character of General Relativity has as consequence that very few analytical solutions are known most of" +"---\nabstract: 'Recently, unsupervised local learning, based on Hebb\u2019s idea that change in synaptic efficacy depends on the activity of the pre- and postsynaptic neuron only, has shown potential as an alternative training mechanism to backpropagation. Unfortunately, Hebbian learning remains experimental and rarely makes it way into standard deep learning frameworks. In this work, we investigate the potential of Hebbian learning in the context of standard deep learning workflows. To this end, a framework for thorough and systematic evaluation of local learning rules in existing deep learning pipelines is proposed. Using this framework, the potential of Hebbian learned feature extractors for image classification is illustrated. 
In particular, the framework is used to expand the Krotov-Hopfield learning rule to standard convolutional neural networks without sacrificing accuracy compared to end-to-end backpropagation. The source code is available at .'\nauthor:\n- |\n Jules Talloen[^1]\\\n Ghent University / ML6\\\n `jules@talloen.eu`\\\n Joni Dambre Alexander Vandesompele\\\n Ghent University - imec\\\n `{joni.dambre, alexander.vandesompele}@ugent.be`\nbibliography:\n- 'main.bib'\ntitle: 'PyTorch-Hebbian: facilitating local learning in a deep learning framework'\n---\n\nIntroduction\n============\n\nIn this work, we study unsupervised local learning rules which rely solely on bottom-up information propagation. Furthermore, we limit the analysis to Hebbian learning by imposing correlated pre-" +"---\nabstract: 'The interaction between a quantum charge and a dynamic source of a magnetic field is considered in the Aharonov-Bohm scenario. It is shown that, in weak interactions with a post-selection of the source, the effective vector potential is, generally, complex-valued. This leads to new experimental protocols to detect the Aharonov-Bohm phase before the source is fully encircled. While this does not necessarily change the nonlocal status of the Aharonov-Bohm effect, it brings new insights into it. Moreover, we discuss how these results might have consequences for the correspondence principle, making complex vector potentials relevant to the study of classical systems.'\nauthor:\n- 'Ismael L. Paiva'\n- Yakir Aharonov\n- Jeff Tollaksen\n- Mordecai Waegell\nbibliography:\n- 'citations.bib'\ntitle: 'Aharonov-Bohm effect with an effective complex-valued vector potential'\n---\n\nIntroduction\n============\n\nThe Aharonov-Bohm (AB) effect\u00a0[@ehrenberg1949refractive; @Aharonov1959; @aharonov1963further] refers to the relative phase $\\phi_{AB} = q\\Phi_B/\\hbar$ acquired by a quantum particle with charge $q$ that encircles, but does not enter, a region with magnetic flux $\\Phi_B$ on its interior. In this scenario, there is a sense in which the charge interacts with the vector potential associated with the magnetic field, which is always non-zero in at least part of the" +"---\nabstract: 'Medical imaging systems are commonly assessed and optimized by use of objective-measures of image quality (IQ) that quantify the performance of an observer at specific tasks. Variation in the objects to-be-imaged is an important source of variability that can significantly limit observer performance. This object variability can be described by stochastic object models (SOMs). In order to establish SOMs that can accurately model realistic object variability, it is desirable to use experimental data. To achieve this, an augmented generative adversarial network (GAN) architecture called AmbientGAN has been developed and investigated. However, AmbientGANs cannot be immediately trained by use of advanced GAN training methods such as the progressive growing of GANs (ProGANs). Therefore, the ability of AmbientGANs to establish realistic object models is limited. To circumvent this, a progressively-growing AmbientGAN (ProAmGAN) has been proposed. However, ProAmGANs are designed for generating two-dimensional (2D) images while medical imaging modalities are commonly employed for imaging three-dimensional (3D) objects. Moreover, ProAmGANs that employ traditional generator architectures lack the ability to control specific image features such as fine-scale textures that are frequently considered when optimizing imaging systems. 
In this study, we address these limitations by proposing two advanced AmbientGAN architectures: 3D ProAmGANs and Style-AmbientGANs" +"---\nabstract: 'The charge state of an ion provides a simplified electronic picture of the bonding in compounds, and heuristically explains the basic electronic structure of a system. Despite its usefulness, the physical and chemical definition of a charge state is not a trivial one, and the essential idea of electron transfer is found to be not a realistic explanation. Here, we study the real-space charge distribution of a cobalt ion in its various charge and spin states, and examine the relation between the formal charge/spin states and the static charge distribution. Taking the prototypical cobalt oxides, La/SrCoO$_3$, and bulk Co metal, we confirm that no prominent static charge transfer exists for different charge states. However, we show that small variations exist in the integrated charges for different charge states, and these are compared to the various spin state cases.'\nauthor:\n- Bongjae Kim\ntitle: 'Real-space charge distribution of cobalt ion and its relation with charge and spin states\\'\n---\n\nIntroduction\n============\n\nThe charge state is an important identity in chemical and materials physics. The concept of charge state (often, indistinguishably used as oxidation state) pedagogically explains the essence of the ionic bonding and provides a simple description of the" +"---\nabstract: 'Consider a compact surface of genus $\\geq 2$ equipped with a metric that is flat everywhere except at finitely many cone points with angles greater than $2\\pi$. Following the technique in the work of Burns, Climenhaga, Fisher, and Thompson, we prove that sufficiently regular potential functions have unique equilibrium states if the singular set does not support the full pressure. Moreover, we show that the pressure gap holds for any potential which is locally constant on a neighborhood of the singular set. Finally, we establish that the corresponding equilibrium states have the $K$-property, and closed regular geodesics equidistribute.'\nauthor:\n- |\n Benjamin Call, David Constantine, Alena Erchenko,\\\n Noelle Sawyer, Grace Work\nbibliography:\n- 'bibliography.bib'\ntitle: Unique equilibrium states for geodesic flows on flat surfaces with singularities\n---\n\nIntroduction\n============\n\nWe examine the uniqueness of equilibrium states for geodesic flows on a specific class of CAT(0) surfaces, those where the negative curvature is concentrated at a finite set of points. Translation surfaces are examples of such surfaces. A translation surface $X$ is a pair $(X, \\omega)$ where $X$ is a Riemann surface of genus $g$, and $\\omega$ is a holomorphic one-form on $X$. The zeroes of this holomorphic one-form" +"---\nabstract: 'We investigate a recent claim that observed galaxy clusters produce an order of magnitude more galaxy-galaxy strong lensing (GGSL) than simulated clusters in a $\\Lambda$CDM cosmology. We take galaxy clusters from the [c-eagle]{}hydrodynamical simulations and calculate the expected amount of GGSL for sources placed behind the clusters at different redshifts. The probability of a source lensed by one of the most massive [c-eagle]{}clusters being multiply imaged by an individual cluster member is in good agreement with that inferred for observed clusters. We show that numerically converged results for the GGSL probability require higher resolution simulations than had been used previously. 
On top of this, different galaxy formation models predict cluster substructures with different central densities, such that the GGSL probabilities in $\\Lambda$CDM cannot yet be robustly predicted. Overall, we find that galaxy-galaxy strong lensing within clusters is not currently in tension with the $\\Lambda$CDM cosmological model.'\nauthor:\n- |\n Andrew Robertson[^1]\\\n Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE, UK\nbibliography:\n- 'bibliography.bib'\ntitle: 'The galaxy-galaxy strong lensing cross-sections of simulated $\\Lambda$CDM galaxy clusters'\n---\n\n=1 =5\n\n\\[firstpage\\]\n\ngalaxies: clusters: general, gravitational lensing: strong\n\nIntroduction\n============\n\nStructure within a $\\Lambda$ Cold Dark Matter" +"---\nabstract: 'Our main aim in this paper is to investigate the rigidity of complete noncompact gradient steady Ricci solitons with harmonic Weyl tensor. More precisely, we prove that an $n$-dimensional ($n\\geq 5$) complete noncompact gradient steady Ricci soliton with harmonic Weyl tensor and multiply warped product metric is either Ricci flat or isometric to the Bryant soliton up to scaling. Meanwhile, for $n\\ge 5$, we provide a local structure theorem for $n$-dimensional connected (not necessarily complete) gradient Ricci solitons with harmonic Weyl curvature and multiply warped product metric.'\naddress: |\n 1. Mathematical Science Research Center\\\n Chongqing University of Technology\\\n Chongqing\\\n 400054. 2. School of Mathematical Sciences\\\n East China Normal University\\\n Shanghai\\\n 200241\nauthor:\n- Fengjiang Li\ntitle: Rigidity of Complete Gradient Steady Ricci Solitons with Harmonic Weyl Curvature\n---\n\nnamedef[subjclassname@2020]{}[ Mathematics Subject Classification]{}\n\nIntroduction\n============\n\nAn $n$-dimensional Riemannian manifold $(M^{n},g)$ is called a gradient Ricci soliton if there exists a smooth function $f$ on $M$ such that the Ricci tensor satisfies the following equation $$\\label{1.1}\nRic+Hess(f)=\\rho g$$ for some constant $\\rho$, where $Ric$ is the Ricci tensor of $g$ and $Hess(f)$ denotes the Hessian of the potential function $f$. The Ricci soliton is said to be shrinking, steady, or" +"---\nabstract: |\n We revisit the concept of \u201cadversary\u201d in online learning, motivated by solving robust optimization and adversarial training using online learning methods. While one of the classical setups in online learning deals with the \u201cadversarial\u201d setup, it appears that this concept is used less rigorously, causing confusion in applying results and insights from online learning. Specifically, there are two fundamentally different types of adversaries, depending on whether the \u201cadversary\u201d is able to anticipate the exogenous randomness of the online learning algorithms. This is particularly relevant to robust optimization and adversarial training because the adversarial sequences are often anticipative, and many online learning algorithms do not achieve diminishing regret in such a case.\n\n We then apply this to solving robust optimization problems or (equivalently) adversarial training problems via online learning and establish a general approach for a large variety of problem classes using *imaginary play*. Here two players play against each other, the primal player playing the decisions and the dual player playing realizations of uncertain data. When the game terminates, the primal player has obtained an approximately robust solution. 
This meta-game allows for solving a large variety of robust optimization and multi-objective optimization problems and generalizes the approach" +"---\nabstract: |\n We define the flow group of any component of any stratum of rooted abelian or quadratic differentials (those marked with a horizontal separatrix) to be the group generated by almost-flow loops. We prove that the flow group is equal to the fundamental group of the component. As a corollary, we show that the plus and minus modular Rauzy\u2013Veech groups are finite-index subgroups of their ambient modular monodromy groups. This partially answers a question of Yoccoz.\n\n Using this, and recent advances on algebraic hulls and Zariski closures of monodromy groups, we prove that the Rauzy\u2013Veech groups are Zariski dense in their ambient symplectic groups. Density, in turn, implies the simplicity of the plus and minus Lyapunov spectra of any component of any stratum of quadratic differentials. Thus, we establish the Kontsevich\u2013Zorich conjecture.\naddress:\n- |\n - Independent\\\n UK\n- |\n - CNRS - Universits\u00e9 de Bordeaux\\\n 351, cours de la Lib\u00e9ration\\\n 33400 Talence\n- |\n - School of Mathematics and Statistics\\\n University of Glasgow\\\n University Place, Glasgow, G128QQ UK\n- '- Centro de Modelamiento Matem\u00e1tico, CNRS-IRL 2807, Universidad de Chile, Beauchef 851, Santiago, Chile.'\n- |\n - Department of Mathematics\\\n University of Warwick\\\n Coventry, CV47AL UK\nauthor:\n-" +"---\nabstract: |\n Polls are a common way of collecting data, including product reviews and feedback forms. However, few data collectors give upfront privacy guarantees. Additionally, when privacy guarantees are given upfront, they are often vague claims about \u2018anonymity\u2019. Instead, we propose giving quantifiable privacy guarantees through the statistical notion of *differential privacy*. Nevertheless, privacy does not come for free. At the heart of differential privacy lies an inherent [trade-off]{}\u00a0between accuracy and privacy that needs to be balanced. Thus, it is vital to properly adjust the accuracy-privacy [trade-off]{}\u00a0before setting out to collect data.\n\n Motivated by the lack of [tools]{}\u00a0to gather poll data under differential privacy, we set out to engineer our own [tool]{}. Specifically, to make *local differential privacy* accessible for all, in this systems paper we present [[Randori]{}]{}, a set of novel open source [tools]{}\u00a0for differentially private poll data collection. [[Randori]{}]{}\u00a0is intended to help data analysts keep their focus on *what* data their poll is collecting, as opposed to *how* they should collect it. Our [tools]{}\u00a0also allow the data analysts to analytically predict the accuracy of their poll. Furthermore, we show that differential privacy alone is not enough to achieve end-to-end" +"---\nabstract: 'Using the framework of higher-form global symmetries, we examine the regime of validity of force-free electrodynamics by evaluating the lifetime of the electric field operator, which is non-conserved due to screening effects. We focus on a holographic model which has the same global symmetry as that of low energy plasma and obtain the lifetime of (non-conserved) electric flux in a strong magnetic field regime. 
The lifetime is inversely correlated to the magnetic field strength and thus suppressed in the strong field regime.'\nauthor:\n- Napat Poovuttikul\n- Aruna Rajagopal\nbibliography:\n- 'biblio.bib'\ntitle: |\n Operator lifetime and the force-free electrodynamic limit\\\n of magnetised holographic plasma\n---\n\nIntroduction\n============\n\nHydrodynamics [@LLfluid] is a well-established theoretical framework which universally describes the long wavelength, low frequency behaviour of interacting systems at finite temperature. Essentially, hydrodynamic theory is a description of conserved quantities and the manifestation of the corresponding symmetries in a system in thermal equilibrium. Theories with widely varying microscopics can have the same macroscopic hydrodynamic description. One possible explanation why such a universal description is possible is that all operators except conserved charges have parametrically short lifetimes compared to the scale of interest and, once the longest-lived non-conserved operator[^1] has" +"---\nauthor:\n- 'Abdelrahman Eldosouky, Tapadhir Das, Anuraag Kotra, and Shamik Sengupta [^1]'\nbibliography:\n- 'mybibfile.bib'\ntitle: |\n Finding the Sweet Spot for Data Anonymization:\\\n A Mechanism Design Perspective\n---\n\nrise of Big Data has helped generate tremendous amounts of digital information that are continually being collected, analyzed, and distributed. This technology has helped organizations personalize their services, optimize their decision making, and help predict future trends [@zyskind2015decentralizing]. Nevertheless, these operations tend to raise public concern due to the fact that much of the data contain user sensitive information. To address these concerns and preserve user privacy, organizations engage in deploying robust security mechanisms to protect their data against different forms of cyber-attacks [@badsha2019privacy]. Consequently, concepts like data security, privacy, and trust have recently received significant attention in the literature as different forms of preserving the data [@soria2017individual; @zhu2014correlated; @keshavarz2020real; @afghah2020cooperative; @boreale2019relative; @Domingo2019Steered].\n\nYet, conventional security mechanisms do not become handy when it comes to data sharing. For instance, encryption based mechanisms can help to secure data shared between different parts or sites of the same organization, e.g., patients\u2019 remote monitoring [@eldosouky2018cybersecurity]. However, it is not feasible to widely share encrypted data, among many organizations, due to key management issues." +"---\nabstract: 'Inverse problems are ubiquitous because they formalize the integration of data with mathematical models. In many scientific applications the forward model is expensive to evaluate, and adjoint computations are difficult to employ; in this setting derivative-free methods which involve a small number of forward model evaluations are an attractive proposition. Ensemble Kalman based interacting particle systems (and variants such as consensus based and unscented Kalman approaches) have proven empirically successful in this context, but suffer from the fact that they cannot be systematically refined to return the true solution, except in the setting of linear forward models\u00a0[@2019arXiv190308866G]. In this paper, we propose a new derivative-free approach to Bayesian inversion, which may be employed for posterior sampling or for maximum a posteriori (MAP) estimation, and may be systematically refined. 
The method relies on a fast/slow system of stochastic differential equations (SDEs) for the local approximation of the gradient of the log-likelihood appearing in a Langevin diffusion. Furthermore the method may be preconditioned by use of information from ensemble Kalman based methods (and variants), providing a methodology which leverages the documented advantages of those methods, whilst also being provably refineable. We define the methodology, highlighting its flexibility and many" +"---\nabstract: 'Monte Carlo event generators are a critical tool for the interpretation of data obtained by neutrino experiments. Several modern event generators are available which are well-suited to the GeV energy scale used in studies of accelerator neutrinos. However, theoretical modeling differences make their immediate application to lower energies difficult. In this paper, I present a new event generator, [`MARLEY`]{}, which is designed to better address the simulation needs of the low-energy (tens of MeV and below) neutrino community. The code is written in [C]{}14 with an optional interface to the popular [ROOT]{}\u00a0data analysis framework. The current release of [`MARLEY`]{}\u00a0(version [1.2.0]{}) emphasizes simulations of the reaction $\\isotope[40]{Ar}(\\nu_e, e^{-})\\isotope[40]{K}^{*}$ but is extensible to other channels with suitable user input. This paper provides detailed documentation of [`MARLEY`]{}\u2019s implementation and usage, including guidance on how generated events may be analyzed and how [`MARLEY`]{}\u00a0may be interfaced with external codes such as Geant4. Further information about [`MARLEY`]{}\u00a0is available on the official website at .'\naddress:\n- 'Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510, USA'\n- 'University of California, Davis, One Shields Avenue, Davis, CA 95616, USA'\nauthor:\n- 'S. Gardiner'\nbibliography:\n- 'marleycpc.bib'\ntitle: 'Simulating low-energy neutrino" +"---\nabstract: 'We provide a UTXO model of blockchain transactions that is able to represent both credit and debt on the same blockchain. Ordinarily, the UTXO model is solely used to represent credit and the representation of credit and debit together is achieved using the account model because of its support for balances. However, the UTXO model provides superior privacy, safety, and scalability when compared to the account model. In this work, we introduce a UTXO model that has the flexibility of balances with the usual benefits of the UTXO model. This model extends the conventional UTXO model, which represents credits as unmatched outputs, by representing debts as unmatched inputs. We apply our model to solving the problem of transparency in reverse mortgage markets, in which some transparency is necessary for a healthy market but complete transparency leads to adverse outcomes. Here the pseudonymous properties of the UTXO model protect the privacy of loan recipients while still allowing an aggregate view of the loan market. We present a prototype of our implementation in Tendermint and discuss the design and its benefits.'\nauthor:\n- Michael Chiu\n- Uro\u0161 Kalabi\u0107\nbibliography:\n- 'main.bib'\ntitle: 'Debt Representation in UTXO Blockchains[^1]'\n---\n\nIntroduction\n============" +"---\nabstract: |\n The method of Lie symmetry analysis of differential equations is applied to determine exact solutions for the Camassa-Choi equation and its generalization. 
We prove that the Camassa-Choi equation is invariant under an infinite-dimensional Lie algebra, with an essential five-dimensional Lie algebra. The application of the Lie point symmetries leads to the construction of exact similarity solutions.\n\n Keywords: Lie symmetries; Similarity solutions; Camassa-Choi; Long waves;\nauthor:\n- |\n Andronikos Paliathanasis[^1]\\\n [\u00a0*Institute of Systems Science, Durban University of Technology* ]{}\\\n [\u00a0*PO Box 1334, Durban 4000, Republic of South Africa*]{}\ntitle: 'Lie symmetry analysis and similarity solutions for the Camassa-Choi equations'\n---\n\nIntroduction\n============\n\nThe Camassa-Choi (CC) equation $$\\left( u_{t}+\\alpha u_{x}-uu_{x}+u_{xx}\\right) _{x}+u_{yy}=0. \\label{cc.01}$$ was derived by Choi and Camassa in [@choi]$~$in order to describe weakly nonlinear internal waves in a two-fluid system. Parameter $\\alpha =h^{-1}$ describes the depth in the two-fluid system. The CC equation can be seen as the two-dimensional extension of the Benjamin-Ono equation; indeed, when $u_{yy}=0$, from (\\[cc.01\\]) the Benjamin-Ono equation is recovered. Because of the nonlinearity of (\\[cc.01\\]), there are no known exact solutions in the literature. Only recently was the existence of small data global solutions proven by Harrop and Marzula in [@cho2].\n\nIn" +"---\nabstract: |\n We show how to construct new Finsler metrics, in two and three dimensions, whose indicatrices are pedal curves or pedal surfaces of some other curves or surfaces. These Finsler metrics are generalizations of the famous slope metric, also called Matsumoto metric.\n\n Keywords: algebraic curves, pedal curves and surfaces, Finsler manifolds, curvature.\n\n MSC2010: 53C60, 14H50.\nauthor:\n- 'Pipatpong Chansri, Pattrawut Chansangiam and Sorin V. Sabau[^1]\\'\ntitle: Finslerian indicatrices as algebraic curves and surfaces\n---\n\n[^2]\n\nIntroduction\n============\n\nFinsler manifolds are natural generalizations of Riemannian manifolds in the same respect as normed spaces and Minkowski spaces are generalizations of Euclidean spaces.\n\nIn the case of the Euclidean space, or, more generally, of Riemannian manifolds, the space looks uniform and isotropic, that is, the same in all directions. However, our daily experiences as well as the metrics and distances naturally appearing in applications to real life problems in Physics, Computer science, biology, etc. show that the space is not isotropic, there exist some preferred directions (see [@AIM], [@MS], [@SSS], [@YS]).\n\nTo be more precise, we recall that a [*Finsler metric*]{} $(M,F)$ is given by specifying a Finsler norm $F:TM\\to \\mathbb{R}$ defined on the tangent space $(TM,M)$ of an $n$-dimensional manifold" +"---\nbibliography:\n- 'ThermoOptBistabilitySiLever.bib'\n---\n\n[ **Thermo-optical bistability in silicon micro-cantilevers** ]{}\n\nBasile Pottier, Ludovic Bellon^\\*^\n\nUniv Lyon, Ens de Lyon, CNRS, Laboratoire de Physique, F-69342 Lyon, France\\\n\\* ludovic.bellon@ens-lyon.fr\n\nAbstract {#abstract .unnumbered}\n========\n\n[**We report a thermo-optical bistability observed in silicon micro-cantilevers irradiated by a laser beam with mW powers: reflectivity, transmissivity, absorption, and temperature can change by a factor of two between two stable states for the same input power. The temperature dependency of the absorption at the origin of the bistability results from interferences between internal reflections in the cantilever thickness, acting as a lossy Fabry-P\u00e9rot cavity.
A theoretical model describing the thermo-optical coupling is presented. The experimental results obtained for silicon cantilevers irradiated in vacuum at two different visible wavelengths are in quantitative agreement with the predictions of this model.**]{}\n\n------------------------------------------------------------------------\n\nIntroduction\n============\n\nMicrometer sized resonators, such as membranes or cantilever, are used as precision sensors in a broad range of applications: mass sensing down to $\\SI{e-18}{g}$\u00a0[@Ono-2003], single molecule light absorption imaging\u00a0[@Chien-2018], force detection with $\\SI{e-19}{N}$ sensitivity in Atomic Force Microscopy (AFM)\u00a0[@Mamin-2001], quantum measurements\u00a0[@Verhagen-2012], to cite just a few applications. To address those mechanical devices and actually perform the measurement, light is" +"---\nabstract: 'This masters project used the recent ATLAS jet substructure measurements to see if any improvements can be made to the commonly used Pythia8 Monash and A14 tunes.'\naddress:\n- |\n School of Physics, University of Witwatersrand\\\n Johannesburg, South Africa.\\\n Email: deepak.kar@cern.ch\n- |\n Department of Physics, BITS Pilani\\\n Rajasthan, India.\\\n Email: pratixan123@gmail.com\nauthor:\n- Deepak Kar\n- Pratixan Sarmah\nbibliography:\n- 'ref.bib'\ntitle: Effect of new jet substructure measurements on Pythia8 tunes\n---\n\nPythia8, jet substructure, FSR, tune\n\nIntroduction {#sec:intro}\n============\n\nThe commonly used Pythia8\u00a0[@Sjostrand:2007gs; @Sjostrand:2014zea] tunes, Monash\u00a0[@Skands:2014pea] and A14\u00a0[@ATL-PHYS-PUB-2014-021] are rather dated, and the latter was observed to have some tension with LEP measurements, primarily due to its lower Final State Radiation (FSR) $\\alpha_{s}$ value. In last couple of years, a plethora of jet substructure\u00a0[@Altheimer:2012mn; @Altheimer:2013yza; @Marzani:2019hun; @Asquith:2018igt] measurements have been published by both ATLAS and CMS collaborations, utilising LHC Run 2 data. Here, we investigate the effect of four such ATLAS measurements on parameters sensitive to jet substructure observables.\n\nTuning setup {#sec:setup}\n============\n\nThe following ATLAS measurements were considered in this study (along with their Rivet identifiers):\n\n- Soft-Drop Jet Mass\u00a0[@Aaboud:2017qwh](ATLAS\\_2017\\_I1637587)\n\n- Jet substructure measurements in multijet events\u00a0[@Aaboud:2019aii] (ATLAS\\_2019\\_I1724098)\n\n-" +"---\nabstract: |\n In general relativity, the description of spacetime relies on idealised rods and clocks, which identify a reference frame. In any concrete scenario, reference frames are associated to physical systems, which are ultimately quantum in nature. A relativistic description of the laws of physics hence needs to take into account such quantum reference frames (QRFs), through which spacetime can be given an operational meaning.\n\n Here, we introduce the notion of a spacetime quantum reference frame, associated to a quantum particle in spacetime. Such formulation has the advantage of treating space and time on equal footing, and of allowing us to describe the dynamical evolution of a set of quantum systems from the perspective of another quantum system, where the parameter in which the rest of the physical systems evolves coincides with the proper time of the particle taken as the QRF. 
Crucially, the proper times in two different QRFs are not related by a standard transformation, but they might be in a quantum superposition one with respect to the other.\n\n Concretely, we consider a system of $N$ relativistic quantum particles in a weak gravitational field, and introduce a timeless formulation in which the global state of the $N$" +"---\nabstract: 'LiteBIRD, the Lite (Light) satellite for the study of $B$-mode polarization and Inflation from cosmic background Radiation Detection, is a space mission for primordial cosmology and fundamental physics. JAXA selected LiteBIRD in May 2019 as a strategic large-class (L-class) mission, with its expected launch in the late 2020s using JAXA\u2019s H3 rocket. LiteBIRD plans to map the cosmic microwave background (CMB) polarization over the full sky with unprecedented precision. Its main scientific objective is to carry out a definitive search for the signal from cosmic inflation, either making a discovery or ruling out well-motivated inflationary models. The measurements of LiteBIRD will also provide us with an insight into the quantum nature of gravity and other new physics beyond the standard models of particle physics and cosmology. To this end, LiteBIRD will perform full-sky surveys for three years at the Sun-Earth Lagrangian point L2 for 15 frequency bands between 34 and 448GHz with three telescopes, to achieve a total sensitivity of 2.16$\\mu$K-arcmin with a typical angular resolution of 0.5$^\\circ$ at 100GHz. We provide an overview of the LiteBIRD project, including scientific objectives, mission requirements, top-level system requirements, operation concept, and expected scientific outcomes.'\nauthor:\n- 'M.\u00a0Hazumi'\n- 'P.A.R." +"---\nabstract: 'Deep hashing methods have been shown to be the most efficient approximate nearest neighbor search techniques for large-scale image retrieval. However, existing deep hashing methods have a poor small-sample ranking performance for case-based medical image retrieval. The top-ranked images in the returned query results may be as a different class than the query image. This ranking problem is caused by classification, regions of interest (ROI), and small-sample information loss in the hashing space. To address the ranking problem, we propose an end-to-end framework, called Attention-based Triplet Hashing (ATH) network, to learn low-dimensional hash codes that preserve the classification, ROI, and small-sample information. We embed a spatial-attention module into the network structure of our ATH to focus on ROI information. The spatial-attention module aggregates the spatial information of feature maps by utilizing max-pooling, element-wise maximum, and element-wise mean operations jointly along the channel axis. To highlight the essential role of classification in di\ufb00erentiating case-based medical images, we propose a novel triplet cross-entropy loss to achieve maximal class-separability and maximal hash code-discriminability simultaneously during model training. The triplet cross-entropy loss can help to map the classification information of images and similarity between images into the hash codes. Moreover, by adopting" +"---\nabstract: 'Relativistic runaway electron avalanches (RREAs) imply a large multiplication of high energy electrons ($\\sim$1\u00a0MeV). Two factors are necessary for this phenomenon: a high electric field sustained over a large distance and an energetic particle to serve as a seed. 
The former sustains particle energies as they keep colliding and lose energy randomly; and the latter serves as a multiplication starting point that promotes avalanches. RREA is usually connected to both terrestrial gamma-ray flashes (TGFs) and gamma-ray glows (also known as Thunderstorm Ground Enhancement (TGE) when detected at ground level) as possible generation mechanism of both events, but the current knowledge does not provide a clear relationship between these events (TGF and TGE), beyond their possible common source mechanism, still as they have different characteristics. In particular, their timescales differ by several orders of magnitude. This work shows that chain reactions by TGF byproducts can continue for the timescale of gamma-ray glows and even provide energetic particles as seeds for RREAs of gamma-ray glows.'\nbibliography:\n- 'main.bib'\n---\n\n[**** ]{}\\\nG. Diniz^1,\\*^, I.S. Ferreira^2^, Y. Wada^1^, T. Enoto^1^,\\\n**[1]{} Extreme Natural Phenomena RIKEN Hakubi Research Team, RIKEN Cluster for Pioneering Research, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan\\\n**[2]{}" +"---\nabstract: 'Precise segmentation of a lesion area is important for optimizing its treatment. Deep learning makes it possible to detect and segment a lesion field using annotated data. However, obtaining precisely annotated data is very challenging in the medical domain. Moreover, labeling uncertainty and imprecision make segmentation results unreliable. In this paper, we address the uncertain boundary problem by a new evidential neural network with an information fusion strategy, and the scarcity of annotated data by semi-supervised learning. Experimental results show that our proposal has better performance than state-of-the-art methods.'\naddress: |\n $^{\\dagger}$ Universit\u00e9 de technologie de Compi\u00e8gne, CNRS, Heudiasyc, Compi\u00e8gne, France\\\n $^{\\star}$ University of Rouen Normandy, Quantif, LITIS,Rouen, France\\\n $^{\\S }$ Institut universitaire de France, Paris, France\nbibliography:\n- 'strings.bib'\n- 'refs.bib'\ntitle: 'Belief function-based semi-supervised learning for brain tumor segmentation '\n---\n\nbelief functions, semi-supervised learning, evidential fusion, brain tumor segmentation\n\nIntroduction {#sec:intro}\n============\n\nDeep learning has achieved great success in many computer vision tasks with abundant labeled training data, such as image recognition, object detection and segmentation, image generation, etc. However, acquiring big labeled training data in the medical domain is particularly challenging, especially for image segmentation. Region labeling in medical image segmentation tasks requires not" +"---\nauthor:\n- 'Rapha\u00ebl Cerf [^1] [^2]Barbara Dembin[^3]'\nbibliography:\n- 'biblio.bib'\ntitle: 'The time constant for Bernoulli percolation is Lipschitz continuous strictly above $p_c$ '\n---\n\n=1\n\n*Dedicated to the memory of Vladas Sidoravicius [^4]*\n\n**Abstract**: We consider the standard model of i.i.d. first passage percolation on ${\\mathbb{Z}}^d$ given a distribution $G$ on $[0,+\\infty]$ ($+\\infty$ is allowed). When it is known that the time constant $\\mu_G$ exists. We are interested in the regularity properties of the map $G\\mapsto\\mu_G$. We first study the specific case of distributions of the form $G_p=p\\delta_1+(1-p)\\delta_\\infty$ for $p>p_c(d)$. In this case, the travel time between two points is equal to the length of the shortest path between the two points in a bond percolation of parameter $p$. 
We show that the function $p\mapsto \mu_{G_p}$ is Lipschitz continuous on every interval $[p_0,1]$, where $p_0>p_c(d)$.\n\n*AMS 2010 subject classifications:* primary 60K35, secondary 82B43.\n\n*Keywords:* First passage percolation, time constant.\n\nIntroduction\n============\n\nThe model of first passage percolation was first introduced by Hammersley and Welsh [@HammersleyWelsh] as a model for the spread of a fluid in a porous medium. Let $d\geq 2$. We consider the graph $({\mathbb{Z}}^d,{\mathbb{E}}^d)$ having for vertices ${\mathbb{Z}}^d$ and for edges ${\mathbb{E}}^d$ the set of the" +"---\nabstract: 'In this paper, we present a method of clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image. The problem is fundamentally ill-posed as attaining the ground truth data is impossible, i.e., images of people wearing the different 3D clothing template model at the exact same pose. We address this challenge by utilizing large-scale synthetic data generated from physical simulation, allowing us to map 2D dense body pose to 3D clothing deformation. With the simulated data, we propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching with the prescribed body-to-cloth contact points and clothing silhouette to fit onto the unlabeled real images. A new neural clothes retargeting network (CRNet) is designed to integrate the semi-supervised retargeting task in an end-to-end fashion. In our evaluation, we show that our method can predict the realistic 3D pose and deformation field needed for retargeting clothes models in real-world examples.'\nauthor:\n- |\n Jae Shin Yoon$^\dagger$ Kihwan Kim$^\sharp$ Jan Kautz$^\sharp$ Hyun Soo Park$^\dagger$\\\n $^\dagger$University of Minnesota $^\sharp$NVIDIA\\\nbibliography:\n- 'arxiv.bib'\ntitle: Neural 3D Clothes Retargeting from a Single Image\n---\n\nIntroduction" +"---\nabstract: 'Reconfigurable intelligent surface (RIS) is an emerging technique employing a metasurface to reflect the signal from the source node to the destination node without consuming any energy. Not only the spectral efficiency but also the energy efficiency can be improved through RIS. Essentially, RIS can be considered a passive relay between the source and destination node. On the other hand, a relay node in a traditional relay network has to be active, which means that it consumes energy when it is relaying the signal or information between the source and destination nodes. In this paper, we compare the performance of RIS and active relay for a general multiple-input multiple-output (MIMO) system. To make the comparison fair and comprehensive, the performance of both the RIS and the active relay is optimized on a best-effort basis. In terms of the RIS, the transmit beamforming and the reflecting coefficients at the RIS are jointly optimized so as to maximize the end-to-end throughput. Although the optimization problem is non-convex, it is equivalently transformed into a weighted mean-square error (MSE) minimization problem, and an alternating optimization algorithm is proposed, which ensures convergence to a stationary point. In terms of active relay, both half duplex relay (HDR)" +"---\nabstract: 'Fractional equations have become the model of choice in several applications where heterogeneities at the microstructure result in anomalous diffusive behavior at the macroscale. 
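As background for the RIS passive-beamforming idea in the abstract above, a toy single-antenna example (our assumption for illustration; the paper's setting is MIMO with joint beamforming and an alternating MSE-based algorithm): the throughput-optimal reflection phases simply co-phase every cascaded path with the direct link.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 64                                                  # number of RIS elements
h_d = rng.normal() + 1j * rng.normal()                  # direct source -> destination
h_r = rng.normal(size=K) + 1j * rng.normal(size=K)      # RIS -> destination
g = rng.normal(size=K) + 1j * rng.normal(size=K)        # source -> RIS

phases = np.angle(h_d) - np.angle(h_r * g)              # co-phasing solution
h_eff = h_d + np.sum(h_r * g * np.exp(1j * phases))     # aligned effective channel
print(abs(h_eff) ** 2 / abs(h_d) ** 2)                  # SNR gain over direct link
```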
In this work we introduce a new fractional operator characterized by a doubly-variable fractional order and possibly truncated interactions. Under certain conditions on the model parameters and on the regularity of the fractional order we show that the corresponding Poisson problem is well-posed. We also introduce a finite element discretization and describe an efficient implementation of the finite-element matrix assembly in the case of piecewise constant fractional order. Through several numerical tests, we illustrate the improved descriptive power of this new operator across media interfaces. Furthermore, we present one-dimensional and two-dimensional $h$-convergence results that show that the variable-order model has the same convergence behavior as the constant-order model.'\nauthor:\n- 'Marta D\u2019Elia[^1]'\n- 'Christian Glusa[^2]'\nbibliography:\n- 'references.bib'\nnocite: '[@*]'\ntitle: 'A fractional model for anomalous diffusion with increased variability. Analysis, algorithms and applications to interface problems'\n---\n\nVariable-order fractional operators, anomalous diffusion, subsurface diffusion, interface problems\n\nIntroduction\n============\n\nNonlocal models are becoming a popular alternative to partial differential equations (PDEs) when the latter fail to capture effects such as multiscale and anomalous behavior." +"---\nabstract: 'A magnon Bose-Einstein condensate in superfluid $^3$He is a fine instrument for studying the surrounding macroscopic quantum system. At zero temperature, the BEC is subject to a few, distinct forms of decay into other collective excitations, owing to momentum and energy conservation in a quantum vacuum. We study the vortex-Higgs mechanism: the vortices relax the requirement for momentum conservation, allowing the optical magnons of the BEC to transform into light Higgs quasiparticles. This observation expands the spectrum of possible interactions between magnetic quasiparticles in $^3$He-B, opens pathways for hunting down elusive phenomena such as the Kelvin wave cascade or bound Majorana fermions, and lays groundwork for building magnon-based quantum devices.'\nauthor:\n- 'S.\u00a0Autti$^{1,2\\ast}$'\n- 'P.J. Heikkinen$^{1,3}$'\n- 'S.M. Laine$^4$'\n- 'J.T. M\u00e4kinen$^{1,5,6}$'\n- 'E.V. Thuneberg$^{4,7}$'\n- 'V.V. Zavjalov$^{1,2}$'\n- 'V.B.\u00a0Eltsov$^{1}$'\ntitle: 'Vortex-mediated relaxation of magnon BEC into light Higgs quasiparticles'\n---\n\nOne illuminating perspective to the ground state of a fermionic condensate, such as zero-temperature superfluid $^3$He, is to treat it as a quantum vacuum where moving objects interact with the excitations of the vacuum [@VolovikBook; @bradley2016breaking; @PhysRevB.98.144512; @PhysRevResearch.2.033013; @autti2020fundamental]. Various collective excitations, for example magnetic quasiparticles (magnons), and topological defects such as quantised vortices can" +"---\nabstract: '*Fine-grained* sentiment analysis attempts to extract sentiment holders, targets and polar expressions and resolve the relationship between them, but progress has been hampered by the difficulty of annotation. *Targeted* sentiment analysis, on the other hand, is a more narrow task, focusing on extracting sentiment targets and classifying their polarity. In this paper, we explore whether incorporating holder and expression information can improve target extraction and classification and perform experiments on eight English datasets. 
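As a quick aid to reading the $h$-convergence claims in the fractional-diffusion abstract above, the empirical order is routinely recovered from errors on successively refined meshes via $p = \log(e_i/e_{i+1}) / \log(h_i/h_{i+1})$. A self-contained sketch with made-up error data (not the paper's results):

```python
import math

def observed_order(h, err):
    """Empirical convergence order from err_i ~ C * h_i^p on mesh sizes h_i."""
    return [math.log(err[i] / err[i + 1]) / math.log(h[i] / h[i + 1])
            for i in range(len(h) - 1)]

h = [1 / 8, 1 / 16, 1 / 32, 1 / 64]
err = [2.1e-2, 5.4e-3, 1.4e-3, 3.5e-4]   # hypothetical L2 errors
print(observed_order(h, err))            # values near 2 mean second-order convergence
```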
We conclude that jointly predicting target and polarity BIO labels improves target extraction, and that augmenting the input text with gold expressions generally improves targeted polarity classification. This highlights the potential importance of annotating expressions for fine-grained sentiment datasets. At the same time, our results show that the performance of current models for predicting polar expressions is poor, hampering the benefit of this information in practice.'\nauthor:\n- |\n Jeremy Barnes, Lilja [\u00d8]{}vrelid, and Erik Velldal\\\n University of Oslo\\\n Department of Informatics\\\n [{jeremycb,liljao,erikve}@ifi.uio.no]{}\nbibliography:\n- 'lit.bib'\ntitle: |\n If you\u2019ve got it, flaunt it:\\\n Making the most of fine-grained sentiment annotations\n---\n\nIntroduction {#sec:intro}\n============\n\nSentiment analysis comes in many flavors, arguably the most complete of which is what is often called fine-grained sentiment analysis [@Wiebe2005;" +"---\nabstract: 'Entity Linking is one of the essential tasks of information extraction and natural language understanding. Entity linking mainly consists of two tasks: recognition and disambiguation of named entities. Most studies address these two tasks separately or focus only on one of them. Moreover, most of the state-of-the-art entity linking algorithms are either supervised, performing poorly in the absence of annotated corpora, or language-dependent, making them inappropriate for multi-lingual applications. In this paper, we introduce an Unsupervised Language-Independent Entity Disambiguation (ULIED) method, which utilizes a novel approach to disambiguate and link named entities. Evaluation of ULIED on different English entity linking datasets as well as the only available Persian dataset illustrates that ULIED in most cases outperforms the state-of-the-art unsupervised multi-lingual approaches.'\nauthor:\n- |\n Majid Asgari-Bidhendi\\\n School of Computer Engineering\\\n Iran University of Science and Technology\\\n Tehran, Iran\\\n `majid.asgari@gmail.com`\\\n Behrooz Janfada\\\n School of Computer Engineering\\\n Iran University of Science and Technology\\\n Tehran, Iran\\\n `behrooz.janfada@gmail.com`\\\n Amir Havangi\\\n School of Computer Engineering\\\n Iran University of Science and Technology\\\n Tehran, Iran\\\n `havangi@yahoo.com`\\\n Sayyed Ali Hossayni\\\n School of Computer Engineering\\\n Iran University of Science and Technology\\\n Tehran, Iran\\\n `hossayni@iran.ir`\\\n Behrouz Minaei-Bidgoli\\\n School of Computer Engineering\\\n Iran University
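To make the "jointly predicting target and polarity BIO labels" conclusion above concrete, a minimal sketch of the joint label scheme (our own illustration; label names are hypothetical, not the paper's exact tag set):

```python
def joint_bio_labels(tokens, spans):
    """Each target span contributes B-/I- tags suffixed with its polarity,
    so a single tagger predicts target extraction and polarity jointly."""
    labels = ["O"] * len(tokens)
    for start, end, polarity in spans:       # end is exclusive
        labels[start] = f"B-targ-{polarity}"
        for i in range(start + 1, end):
            labels[i] = f"I-targ-{polarity}"
    return labels

tokens = ["The", "battery", "life", "is", "great", "."]
print(joint_bio_labels(tokens, [(1, 3, "Positive")]))
# ['O', 'B-targ-Positive', 'I-targ-Positive', 'O', 'O', 'O']
```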
Further, we describe the automation and remote access components of the quantum computing stack. We conclude by describing characterization measurements relevant to quantum computing including site-resolved single-qubit interactions, and entangling operations mediated by the interaction delivered via two distinct addressing approaches. Using this setup we produce maximally-entangled Greenberger\u2013Horne\u2013Zeilinger states with up to 24 ions without the use of post-selection or error mitigation techniques; on par with well-established conventional laboratory setups.'\naddress:\n- '$^1$ Institut f[\u00fc]{}r Experimentalphysik, 6020 Innsbruck, Austria'\n- '$^2$ Alpine Quantum Technologies (AQT)," +"---\nabstract: |\n Sequential likelihood ratio testing is found to be most powerful in sequential studies with early stopping rules when grouped data come from the one-parameter exponential family. First, to obtain this elusive result, the probability measure of a group sequential design is constructed with support for all possible outcome events, as is useful for designing an experiment prior to having data. This construction identifies impossible events that are not part of the support. The overall probability distribution is dissected into stage specific components. These components are sub-densities of interim test statistics first described by Armitage, McPherson and Rowe (1969) that are commonly used to create stopping boundaries given an $\\alpha$-spending function and a set of interim analysis times. Likelihood expressions conditional on reaching a stage are given to connect pieces of the probability anatomy together.\n\n The reduction of the support caused by the adoption of an early stopping rule induces sequential truncation (not nesting) in the probability distributions of possible events. Multiple testing induces mixtures on the adapted support. Even asymptotic distributions of inferential statistics are mixtures of truncated distributions. In contrast to the classical result on local asymptotic normality (Le Cam 1960), statistics that are asymptotically normal" +"---\nabstract: 'Regularized kernel-based methods such as support vector machines (SVMs) typically depend on the underlying probability measure $\\textnormal{P}$ (respectively an empirical measure $\\textnormal{D}_n$ in applications) as well as on the regularization parameter $\\lambda$ and the kernel $k$. Whereas classical statistical robustness only considers the effect of small perturbations in $\\textnormal{P}$, the present paper investigates the influence of simultaneous slight variations in the whole triple $(\\textnormal{P},\\lambda,k)$, respectively $(\\textnormal{D}_n,\\lambda_n,k)$, on the resulting predictor. Existing results from the literature are considerably generalized and improved. In order to also make them applicable to big data, where regular SVMs suffer from their super-linear computational requirements, we show how our results can be transferred to the context of localized learning. 
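For readers of the trapped-ion abstract above, a small numerical reminder of what a GHZ state and its fidelity are (generic textbook construction, unrelated to the experiment's hardware):

```python
import numpy as np

def ghz_state(n):
    """|GHZ_n> = (|0...0> + |1...1>) / sqrt(2)."""
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

psi = ghz_state(4)
rho_ideal = np.outer(psi, psi.conj())
# Fidelity of a noisy state with the ideal GHZ state: F = <psi|rho|psi>
rho_noisy = 0.9 * rho_ideal + 0.1 * np.eye(16) / 16     # made-up noise model
print(np.real(psi.conj() @ rho_noisy @ psi))            # ~0.906
```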
Here, the effect of slight variations in the applied regionalization, which might for example stem from changes in $\textnormal{P}$, respectively $\textnormal{D}_n$, is considered as well.'\nauthor:\n- |\n **Hannes K\u00f6hler**[^1]\u00a0 and **Andreas Christmann**\\\n Department of Mathematics, University of Bayreuth, Germany\ntitle: '**Total Stability of SVMs and Localized SVMs**'\n---\n\nIntroduction {#Sec:Rob_Intro}\n============\n\nLet [[${{\ensuremath{\mathcal{X}}\xspace}}\times{{\ensuremath{\mathcal{Y}}\xspace}}$]{}]{} be a set and let ${{\ensuremath{\textnormal{P}}\xspace}}$ be the distribution of a pair of random variables $(X,Y)$ with values in [[${{\ensuremath{\mathcal{X}}\xspace}}\times{{\ensuremath{\mathcal{Y}}\xspace}}$]{}]{}, where $X$ is the input variable and $Y$ is the real-valued output variable." +"---\nabstract: 'Deep Convolutional Neural Networks (DCNNs) have shown promising performance in several visual recognition problems, which has motivated researchers to propose popular architectures such as LeNet, AlexNet, VGGNet, ResNet, and many more. These architectures come at the cost of high computational complexity and parameter storage. To get rid of storage and computational complexity, deep model compression methods have evolved. We propose a \u201cHistory Based Filter Pruning (HBFP)\u201d method that utilizes network training history for filter pruning. Specifically, we prune the redundant filters by observing similar patterns in the filters\u2019 $\ell_{1}$-norms (absolute sum of weights) over the training epochs. We iteratively prune the redundant filters of a CNN in three steps. First, we train the model and select the filter pairs with redundant filters in each pair. Next, we optimize the network to ensure an increased measure of similarity between the filters in a pair. This optimization of the network allows us to prune one filter from each pair based on its importance without much information loss. Finally, we retrain the network to regain the performance that is lost due to filter pruning. We test our approach on popular architectures such as LeNet-5 on the MNIST dataset; VGG-16, ResNet-56, and" +"---\nabstract: 'In the last few years, three major topics have received increased interest: deep learning, NLP and conversational agents. Bringing these three topics together to create an amazing digital customer experience and indeed deploying them in production to solve real-world problems is innovative and disruptive. We introduce a new Portuguese financial domain language representation model called BERTa\u00fa. BERTa\u00fa is an uncased BERT-base trained from scratch with data from the Ita\u00fa virtual assistant chatbot solution. The novelty of this contribution lies in the fact that the BERTa\u00fa pretrained language model requires less data, reaches state-of-the-art performance in three NLP tasks, and generates a smaller and lighter model that makes the deployment feasible. We developed three tasks to validate our model: information retrieval with Frequently Asked Questions (FAQ) from Ita\u00fa bank, sentiment analysis from our virtual assistant data, and a NER solution. All proposed tasks are real-world solutions in production in our environment, and the usage of a specialist model proved to be effective when compared to `Google BERT multilingual` and Facebook\u2019s `DPRQuestionEncoder`, available at Hugging Face. 
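A minimal sketch of the history-based redundancy selection described in the HBFP abstract above, under our own assumptions (Euclidean distance between $\ell_1$-norm trajectories; the paper's exact pairing criterion may differ):

```python
import numpy as np

def l1_history(weight_history):
    """weight_history: (epochs, filters, ...) -> (epochs, filters) l1-norms."""
    e, f = weight_history.shape[:2]
    return np.abs(weight_history).reshape(e, f, -1).sum(axis=2)

def most_similar_pair(norms):
    """Filters whose l1-norm trajectories stay closest over training are
    treated as a redundant pair; one of them is a pruning candidate."""
    f = norms.shape[1]
    best, pair = np.inf, None
    for i in range(f):
        for j in range(i + 1, f):
            d = np.linalg.norm(norms[:, i] - norms[:, j])
            if d < best:
                best, pair = d, (i, j)
    return pair

hist = np.random.default_rng(0).normal(size=(10, 8, 3, 3, 3))  # fake history
print(most_similar_pair(l1_history(hist)))
```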
`BERTa\u00fa` improves the performance by $22\%$ on the FAQ Retrieval MRR metric, by $2.1\%$ in Sentiment Analysis F$_1$ score, and by $4.4\%$ in NER F$_1$ score. It can also represent the" +"---\nabstract: 'A simulation tool capable of speeding up the calculation for linear poroelasticity problems in heterogeneous porous media is of large practical interest for engineers, in particular, to effectively perform sensitivity analyses, uncertainty quantification, optimization, or control operations on the fluid pressure and bulk deformation fields. Towards this goal, we present here a non-intrusive model reduction framework using proper orthogonal decomposition (POD) and neural networks, based on the usual offline-online paradigm. As the conductivity of porous media can be highly heterogeneous and span several orders of magnitude, we utilize the interior penalty discontinuous Galerkin (DG) method as a full order solver to handle discontinuity and ensure local mass conservation during the offline stage. We then use POD as a data compression tool and compare the nested POD technique, in which time and uncertain parameter domains are compressed consecutively, to the classical POD method in which all domains are compressed simultaneously. The neural networks are finally trained to map the set of uncertain parameters, which could correspond to material properties, boundary conditions, or geometric characteristics, to the collection of coefficients calculated from an $L^2$ projection over the reduced basis. We then perform a non-intrusive evaluation of the neural networks to" +"---\nabstract: 'Deep learning is a rapidly-evolving technology with the potential to significantly improve the physics reach of collider experiments. In this study we developed a novel vertex finding algorithm for future lepton colliders such as the International Linear Collider. We deploy two networks: one consists of simple fully-connected layers to look for vertex seeds from track pairs, and the other is a customized Recurrent Neural Network with an attention mechanism and an encoder-decoder structure to associate tracks to the vertex seeds. The performance of the vertex finder is compared with the standard ILC vertex reconstruction algorithm.'\naddress:\n- 'Department of Physics, Graduate School of Science, Kyushu University'\n- 'Department of Physics, Faculty of Science, Kyushu University'\n- 'Research Center for Advanced Particle Physics (RCAPP), Kyushu University'\n- 'Department of Physics, Graduate School of Science, The University of Tokyo'\n- 'Osaka University Institute for Datability Science (IDS)'\n- 'Department of Mathematics and Physics, Graduate School of Science, Osaka City University'\n- 'Nambu Yoichiro Institute of Theoretical and Experimental Physics (NITEP), Osaka City University'\n- 'Research Center for Nuclear Physics (RCNP), Osaka University'\nauthor:\n- Kiichi Goto\n- Taikan Suehara\n- Tamaki Yoshioka\n- Masakazu Kurata\n- Hajime Nagahara\n- Yuta" +"---\nabstract: 'Theory-based scaling laws of the near and far scrape-off layer (SOL) widths are analytically derived for L-mode diverted tokamak discharges by using a two-fluid model. The near SOL pressure and density decay lengths are obtained by leveraging a balance among the power source, perpendicular turbulent transport across the separatrix, and parallel losses at the vessel wall, while the far SOL pressure and density decay lengths are derived by using a model of intermittent transport mediated by filaments. 
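A compact sketch of the offline POD compression step named in the poroelasticity abstract above (generic SVD-based POD with made-up snapshot data; the nested variant and the DG solver are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
snapshots = rng.normal(size=(2000, 120))        # (dofs, n_snapshots), fake data

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1    # modes retaining 99.99% energy
basis = U[:, :r]                                # reduced basis, (dofs, r)

# Online stage: a regressor (e.g. a neural network) maps parameters to the r
# coefficients; a reduced solution is then basis @ coefficients.
coeffs = basis.T @ snapshots[:, 0]              # L2 projection of one snapshot
print(r, np.linalg.norm(snapshots[:, 0] - basis @ coeffs))
```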
The analytical estimate of the pressure decay length in the near SOL is then compared to the results of three-dimensional, flux-driven, global, two-fluid turbulence simulations of L-mode diverted tokamak plasmas, and validated against measurements taken from an experimental multi-machine database of divertor heat flux profiles, showing in both cases a very good agreement. Analogously, the theoretical scaling law for the pressure decay length in the far SOL is compared to simulation results and to experimental measurements in TCV L-mode discharges, pointing out the need for a large multi-machine database for the far SOL decay lengths.'\naddress:\n- '$^1$Ecole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL), Swiss Plasma Center (SPC), CH-1015 Lausanne, Switzerland'\n- '$^2$Politecnico di Milano, Via Ponzio 34/3, 20133 Milan, Italy'" +"---\nabstract: 'In this paper, we propose a novel SpatioTemporal convolutional Dense Network (STDNet) to address the video-based crowd counting problem, which contains the decomposition of 3D convolution and the 3D spatiotemporal dilated dense convolution to alleviate the rapid growth of the model size caused by the Conv3D layer. Moreover, since the dilated convolution extracts the multiscale features, we combine the dilated convolution with the channel attention block to enhance the feature representations. Because of errors arising from the difficulty of labeling crowds, especially in videos, imprecise or standard-inconsistent labels may lead to poor convergence for the model. To address this issue, we further propose a new patch-wise regression loss (PRL) to improve the original pixel-wise loss. Experimental results on three video-based benchmarks, i.e., the UCSD, Mall and WorldExpo\u201910 datasets, show that STDNet outperforms both image- and video-based state-of-the-art methods. The source codes are released at .'\nauthor:\n- 'Yu-Jen Ma, Hong-Han Shuai, and Wen-Huang Cheng'\nbibliography:\n- 'reference.bib'\ntitle: 'Spatiotemporal Dilated Convolution with Uncertain Matching for Video-based Crowd Estimation'\n---\n\nCrowd counting, density map regression, spatiotemporal modeling, dilated convolution, patch-wise regression loss.\n\nIntroduction {#Intro}\n============\n\nCrowd counting plays an important role in computer vision since it facilitates a" +"---\nabstract: 'The present study focuses on the receptor-driven endocytosis typical of viral entry into a cell. A locally increased density of receptors at the time of contact between the cell and the virus is necessary in this case. The virus is considered a substrate with fixed receptors on its surface, whereas the receptors of the host cell are free to move over its membrane, allowing a local change in their concentration. In the contact zone the membrane inflects and forms an envelope around the virus. The created vesicle imports its cargo into the cell. This paper uses the diffusion equation, accompanied by boundary conditions requiring the conservation of binders, to describe the process. Moreover, it introduces a condition defining the energy balance at the front of the adhesion zone. The latter yields the upper limit for the size of a virus that can be engulfed by the cell membrane. The described moving boundary problem in terms of the binder density and the velocity of the adhesion front is well posed and numerically solved by using the finite difference method. 
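A minimal sketch of a patch-wise regression loss in the spirit of the STDNet abstract above (our interpretation, assuming average pooling to patch-level counts; not the released code): comparing patch counts rather than individual pixels is more tolerant of slightly misplaced annotations.

```python
import torch
import torch.nn.functional as F

def patch_wise_regression_loss(pred, gt, patch=8):
    """pred, gt: (B, 1, H, W) density maps; compare per-patch crowd counts."""
    pred_counts = F.avg_pool2d(pred, patch) * patch * patch
    gt_counts = F.avg_pool2d(gt, patch) * patch * patch
    return F.mse_loss(pred_counts, gt_counts)

pred = torch.rand(2, 1, 64, 64)
gt = torch.rand(2, 1, 64, 64)
print(patch_wise_regression_loss(pred, gt))
```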
The illustrative examples have been chosen to show the influence of the process parameters on the initiation and the" +"---\nauthor:\n- 'Yongyi Zhao\*, Ankit Raghuram\*, Hyun K. Kim, Andreas H. Hielscher,\\\n Jacob T. Robinson, and Ashok Veeraraghavan'\n- 'Paper ID [16]{}'\nbibliography:\n- 'dotbib.bib'\ntitle: 'High Resolution, Deep Imaging Using Confocal Time-of-flight Diffuse Optical Tomography'\n---\n\nLight scattering by tissue is the primary challenge limiting our ability to exploit non-ionizing, optical radiation in the 400-1000 nm wavelength range to perform high-resolution structural or functional imaging deep inside the human body. Most existing techniques, including confocal microscopy, two-photon (2P) microscopy, and optical coherence tomography (OCT), exploit only the ballistic (or single-scattered) photons and can only be used to image within the ballistic regime ($<15$ mean scattering lengths deep) [@Pediredla2016; @oh2019skin]. This limits imaging to approximately the top 1-2 millimeters of the tissue surface (as mean scattering lengths in tissue are in the $\approx 50-150$ $\mu$m range [@Pediredla2016; @Bevilacqua1999]) as seen in Fig. \[fig:dot-comparison\]a. Many applications (both clinical and scientific) require imaging at much higher depths of penetration than can be achieved by remaining within the ballistic regime.\n\nDiffuse optical tomography (DOT) [@boas2001imaging] has emerged as one of the most promising techniques (another being photo-acoustic tomography [@Xia2014]) for high-resolution imaging deep within tissue, in the diffusion regime (i.e., $>50$ mean scattering lengths). The idea in" +"---\nabstract: |\n This paper presents new *variance-aware* confidence sets for linear bandits and linear mixture Markov Decision Processes (MDPs). With the new confidence sets, we obtain the following regret bounds:\n\n - For linear bandits, we obtain an $\widetilde{O}(\operatorname*{poly}(d)\sqrt{1 + \sum_{k=1}^{K}\sigma_k^2})$ data-dependent regret bound, where $d$ is the feature dimension, $K$ is the number of rounds, and $\sigma_k^2$ is the *unknown* variance of the reward at the $k$-th round. This is the first regret bound that only scales with the variance and the dimension but with *no explicit polynomial dependency on $K$*. When variances are small, this bound can be significantly smaller than the $\widetilde{\Theta}\left(d\sqrt{K}\right)$ worst-case regret bound.\n\n - For linear mixture MDPs, we obtain an $\widetilde{O}(\operatorname*{poly}(d, \log H)\sqrt{K})$ regret bound, where $d$ is the number of base models, $K$ is the number of episodes, and $H$ is the planning horizon. 
This is the first regret bound that only scales *logarithmically* with $H$ in the reinforcement learning with linear function approximation setting, thus *exponentially improving* existing results, and resolving an open problem in [@zhou2020nearly].\n\n We develop three technical ideas that may be of independent interest: 1) applications of the peeling technique to both the input norm and the variance magnitude," +"---\nabstract: 'The quantum Ising model on a triangular lattice hosts a finite temperature Berezinskii-Kosterlitz-Thouless (BKT) phase with emergent U(1) symmetry, and it transitions into an up-up-down (UUD) phase with $C_3$ symmetry breaking upon application of an infinitesimal external field along the longitudinal direction, but the overall phase diagram spanned by the axes of external field and temperature remains opaque due to the lack of systematic investigations with controlled methodologies. By means of quantum Monte Carlo at finite temperature and ground state density matrix renormalization group simulations, we map out the phase diagram of the triangular quantum Ising model. Starting from the upper BKT temperature at zero field, we obtain the phase boundary between the UUD and paramagnetic phases with its 2D $q=3$ Potts universality at weak fields and weakly first-order transition at strong fields. Originating from the lower BKT temperature at zero field, we analyze the low temperature phase boundary between the clock phase and the UUD phase with Ising symmetry breaking at weak fields and the quantum phase transition between the UUD and fully polarized phases at strong fields. The accurate many-body numerical results are consistent with our field theoretical analysis. The experimental relevance towards the BKT magnet TmMgGaO$_4$ and" +"---\nauthor:\n- Luigi Alfonsi\n- 'and David S. Berman'\nbibliography:\n- 'sample.bib'\ntitle: Double Field Theory and Geometric Quantisation\n---\n\nIntroduction\n============\n\nGeometric quantisation provides an approach to quantisation that is underpinned by the symplectic geometry of phase space. Its emergence in the 1970s from the work of Kostant and Souriau has produced a geometric approach to quantisation that provides numerous insights into the quantisation procedure. In particular, it showed how the symplectic symmetry of phase space is broken in naive quantisation methods even though the physics is left invariant and how the underlying symplectic symmetry may be restored (or even extended to the metaplectic group). In more mundane language, classical Hamiltonian physics is invariant under canonical transformations and yet the wavefunctions of quantum mechanics are functions of just half the coordinates of phase space and thus not symplectic representations. A key part of quantum mechanics is that physics cannot depend on the choice of basis of wavefunctions. We can transform between the coordinate and momentum basis and the physics is invariant. In fact, the coordinate and momentum representations are mutually non-local and to move between different bases requires a nonlocal transformation (this is the Fourier transform, in a free" +"---\nabstract: 'It is shown how a mechanism which allows naturally small Dirac neutrino masses is linked to the existence of dark matter through an anomaly-free U(1) gauge symmetry of fermion singlets.'\n---\n\nUCRHEP-T608\\\nJan 2021\n\n[**Linkage of Dirac Neutrinos to\\\nDark U(1) Gauge Symmetry\\\n**]{}\n\n\u00a0:\u00a0 A mechanism for obtaining small Dirac fermion masses has been known since 2001\u00a0[@m01]. 
It was originally used\u00a0[@m01] in conjunction with the seesaw mechanism for small Majorana neutrino masses, and later generalized in 2009\u00a0[@glr09]. It was also applied in 2016\u00a0[@m16] to light quark and lepton masses.\n\nThe idea is very simple. Start with the standard model (SM) of quarks and leptons with just one Higgs scalar doublet $\Phi = (\phi^+,\phi^0)$. Add a second Higgs scalar doublet $\eta = (\eta^+,\eta^0)$ which is distinguished from $\Phi$ by a symmetry yet to be chosen. Depending on how quarks and leptons transform under this new symmetry, $\Phi$ and $\eta$ may couple to different combinations of fermion doublets and singlets. These Yukawa couplings are dimension-four terms of the Lagrangian which must obey this new symmetry.\n\nIn the Higgs sector, this new symmetry is allowed to be broken softly or spontaneously, such that $\langle \eta^0" +"---\nabstract: 'We derive the mass-radius relation and mass function of molecular clumps in the Large Magellanic Cloud (LMC) and interpret them in terms of the simple feedback model proposed by Fall, Krumholz, and Matzner (FKM). Our work utilizes the dendrogram-based catalog of clumps compiled by Wong et al. from $^{12}$CO and $^{13}$CO maps of six giant molecular clouds in the LMC observed with the Atacama Large Millimeter Array (ALMA). The Magellanic Clouds are the only external galaxies for which this type of analysis is possible at the necessary spatial resolution ($\sim1$ pc). We find that the mass-radius relation and mass function of LMC clumps have power-law forms, $R \propto M^{\alpha}$ and $dN/dM \propto M^{\beta}$, with indices $\alpha = 0.36 \pm 0.03$ and $\beta= -1.8 \pm 0.1 $ over the mass ranges $10^2 M_\odot \la M \la 10^5 M_\odot$ and $10^2 M_\odot \la M \la 10^4 M_\odot$, respectively. With these values of $\alpha$ and $\beta$ for the clumps (i.e., protoclusters), the predicted index for the mass function of young LMC clusters from the FKM model is $\beta \approx 1.7$, in good agreement with the observed index. The situation portrayed here for clumps and clusters in the LMC replicates that in" +"---\nauthor:\n- 'H. Socas-Navarro'\n- 'A. Asensio Ramos'\nbibliography:\n- 'aanda.bib'\n- 'HSN\_bib.bib'\ndate: 'Received ; accepted '\ntitle: 'Mapping the Sun\u2019s upper photosphere with artificial neural networks'\n---\n\n[We have developed an inversion procedure designed for high-resolution solar spectro-polarimeters, such as Hinode/SP or DKIST/ViSP. The procedure is based on artificial neural networks trained with profiles generated from random atmospheric stratifications for a high generalization capability. When applied to Hinode data we find a hot fine-scale network structure whose morphology changes with height. In the middle layers this network resembles what is observed in G-band filtergrams but it is not identical. Surprisingly, the temperature enhancements in the middle and upper photosphere have a reversed pattern. Hot pixels in the middle photosphere, possibly associated with small-scale magnetic elements, appear cool at the [$\log \tau_{500}$]{}=$-3$ and $-4$ level, and vice versa. Finally, we find hot arcs on the limb side of magnetic pores, which we interpret as the first direct observational evidence of the \u201chot wall\u201d effect in temperature.]{}\n\nIntroduction\n============\n\nInversion techniques allow us to retrieve information encoded in spectral lines about the atmospheres where they form. 
A wide variety of strategies have been employed for decades in solar physics to" +"---\nabstract: |\n Item response theory (IRT) models typically rely on a normality assumption for subject-specific latent traits, which is often unrealistic in practice. Semiparametric extensions based on Dirichlet process mixtures offer a more flexible representation of the unknown distribution of the latent trait. However, the use of such models in the IRT literature has been extremely limited, in good part because of the lack of comprehensive studies and accessible software tools. This paper provides guidance for practitioners on semiparametric IRT models and their implementation. In particular, we rely on NIMBLE, a flexible software system for hierarchical models that enables the use of Dirichlet process mixtures. We highlight efficient sampling strategies for model estimation and compare inferential results under parametric and semiparametric models.\n\n *Keywords*: binary IRT models, Dirichlet process mixture, MCMC strategies, NIMBLE.\nauthor:\n- 'Sally Paganin [^1]'\n- 'Christopher J. Paciorek'\n- Claudia Wehrhahn\n- 'Abel Rodr[\u00ed]{}guez'\n- 'Sophia Rabe-Hesketh'\n- Perry de Valpine\nbibliography:\n- 'bibliography.bib'\ntitle: Computational strategies and estimation performance with Bayesian semiparametric Item Response Theory models\n---\n\nIntroduction\n============\n\nTraditional approaches in item response theory (IRT) modeling rely on the assumption that subject-specific latent traits follow a normal distribution. This assumption is often considered for" +"---\nabstract: 'We investigate the repeated prisoner\u2019s dilemma game where both players alternately use reinforcement learning to obtain their optimal memory-one strategies. We theoretically solve the simultaneous Bellman optimality equations of reinforcement learning. We find that the Win-stay Lose-shift strategy, the Grim strategy, and the strategy which always defects can form a symmetric equilibrium of the mutual reinforcement learning process amongst all deterministic memory-one strategies.'\naddress:\n- 'Faculty of Science, Yamaguchi University, Yamaguchi 753-8511, Japan'\n- 'Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi 753-8511, Japan'\nauthor:\n- Yuki Usui\n- Masahiko Ueda\nbibliography:\n- 'RL\_RPD.bib'\ntitle: 'Symmetric equilibrium of multi-agent reinforcement learning in repeated prisoner\u2019s dilemma'\n---\n\nRepeated prisoner\u2019s dilemma game; Reinforcement learning\n\nIntroduction {#sec:introduction}\n============\n\nThe prisoner\u2019s dilemma game describes a dilemma where the rational behavior of each player cannot achieve a favorable situation for both players [@RCO1965]. In the game, each player chooses cooperation or defection. Each player can obtain a higher payoff by taking defection than by taking cooperation, regardless of the opponent\u2019s action. Then, mutual defection is realized as a result of the rational thought of both players, while the payoffs of both players increase when both players choose cooperation. Although the Nash equilibrium of the one-shot" +"---\nabstract: 'Heterogeneous multi-task learning (HMTL) is an important topic in multi-task learning (MTL). Most existing HMTL methods address either the scenario where all tasks reside in the same input (feature) space but not necessarily a consistent output (label) space, or the scenario where their input (feature) spaces are heterogeneous while the output (label) space is consistent. 
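For the repeated prisoner's dilemma abstract above, a standard way to evaluate memory-one strategies is via the stationary distribution of the induced 4-state Markov chain (a textbook construction, not the paper's derivation; payoff values are the conventional T=5, R=3, P=1, S=0):

```python
import numpy as np

def payoffs(p, q, R=3, S=0, T=5, P=1):
    """p, q: cooperation probabilities after outcomes (CC, CD, DC, DD),
    seen from each player's own perspective; returns long-run payoffs."""
    q_perm = [q[0], q[2], q[1], q[3]]        # swap roles for the second player
    M = np.array([[p[i] * q_perm[i],
                   p[i] * (1 - q_perm[i]),
                   (1 - p[i]) * q_perm[i],
                   (1 - p[i]) * (1 - q_perm[i])] for i in range(4)])
    w, v = np.linalg.eig(M.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()                           # stationary distribution over CC,CD,DC,DD
    return pi @ np.array([R, S, T, P]), pi @ np.array([R, T, S, P])

wsls = [1, 0, 0, 1]                          # Win-stay Lose-shift
print(payoffs(wsls, wsls))                   # mutual WSLS sustains cooperation: (3, 3)
```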
However, to the best of our knowledge, there are limited studies on the twofold heterogeneous MTL (THMTL) scenario where the input and the output spaces are both inconsistent or heterogeneous. In order to handle this complicated scenario, in this paper, we design a simple and effective multi-task adaptive learning (MTAL) network to learn multiple tasks in such a THMTL setting. Specifically, we explore and utilize the inherent relationship between tasks for knowledge sharing from similar convolution kernels in individual layers of the MTAL network. Then, in order to realize the sharing, we compute a weighted aggregate of any pair of convolutional kernels whose similarity is greater than some threshold $\rho$; consequently, our model effectively performs cross-task learning while suppressing the intra-redundancy of the entire network. Finally, we conduct end-to-end training. Our experimental results demonstrate the effectiveness of our method in comparison with the state-of-the-art counterparts.'\naddress:\n- 'College of Computer" +"---\nabstract: 'Social media has become popular and has percolated into almost all aspects of our daily lives. While online posting proves very convenient for individual users, it also fosters the fast spreading of various rumors. The rapid and wide percolation of rumors can cause persistent adverse or detrimental impacts. Therefore, researchers invest great effort in reducing the negative impacts of rumors. Towards this end, the rumor classification system aims to detect, track, and verify rumors in social media. Such systems typically include four components: (i) a rumor detector, (ii) a rumor tracker, (iii) a stance classifier, and (iv) a veracity classifier. In order to improve the state-of-the-art in rumor detection, tracking, and verification, we propose VRoC, a tweet-level variational autoencoder-based rumor classification system. VRoC consists of a co-train engine that trains variational autoencoders (VAEs) and rumor classification components. The co-train engine helps the VAEs to tune their latent representations to be classifier-friendly. We also show that VRoC is able to classify unseen rumors with high levels of accuracy. For the PHEME dataset, VRoC consistently outperforms several state-of-the-art techniques, on both observed and unobserved rumors, by up to $26.9\%$, in terms of macro-F1 scores.[^1]'\nauthor:\n- Mingxi Cheng\n- Shahin Nazarian\n- Paul Bogdan\nbibliography:" +"---\nabstract: 'We provide a generalization of McMullen\u2019s algorithm to approximate the Hausdorff dimension of the limit set for convex-cocompact subgroups of isometries of the Complex Hyperbolic Plane.'\naddress:\n- 'Instituto de Matem\u00e1tica, Universidade Federal do Rio de Janeiro, Cidade Universit\u00e1ria - Ilha do Fund\u00e3o, Rio de Janeiro 21941-909, Brazil'\n- ' Institut de Math\u00e9matiques, Universit\u00e9 Pierre et Marie Curie, 4 Place Jussieu, F-75252, Paris, France.'\nauthor:\n- Sergio Roma\u00f1a\n- 'Alejandro Ucan-Puc'\nbibliography:\n- 'McMullenAlgorithmGeneralization.bib'\ntitle: \n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe Hausdorff dimension is a bi-Lipschitz invariant, and in the case of Kleinian groups, it allows us to understand what kind of fractal spaces can be a limit set of Kleinian groups. In 1998, McMullen ([@Mcmullen1998]) proposed an algorithm to approximate the Hausdorff dimension of a set associated with a conformal dynamical system (such as Julia sets or limit sets of geometrically finite Kleinian groups). 
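A minimal sketch of the similarity-gated kernel aggregation we read in the MTAL abstract above; the cosine similarity measure and equal averaging weights are our assumptions, not the paper's exact recipe:

```python
import numpy as np

def aggregate_kernels(k1, k2, rho=0.8, w=0.5):
    """If two convolution kernels are similar enough (similarity > rho),
    replace both by a weighted aggregate, sharing knowledge across tasks."""
    cos = np.sum(k1 * k2) / (np.linalg.norm(k1) * np.linalg.norm(k2))
    if cos > rho:
        merged = w * k1 + (1 - w) * k2
        return merged, merged.copy(), True
    return k1, k2, False                     # otherwise keep task-specific kernels

rng = np.random.default_rng(0)
k1 = rng.normal(size=(3, 3))
k2 = k1 + 0.1 * rng.normal(size=(3, 3))      # nearly identical kernel
print(aggregate_kernels(k1, k2)[2])          # True: the pair was aggregated
```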
Naively, McMullen\u2019s algorithm works as follows:\n\n1. Given a Markov partition of the dynamical system, we compute the transition matrix T using the data of the dynamical system.\n\n2. We solve for $\alpha$ such that the spectral radius of $T^\alpha$ is 1. The matrix $T^\alpha$ is equal to the matrix where each" +"---\nabstract: 'Animals locomote robustly and agilely, despite significant sensorimotor delays of their nervous systems. The sensorimotor control of legged robots is implemented at much higher frequencies\u2014 often in the kilohertz range\u2014and sensor and actuator delays in the low millisecond range. But especially at harsh impacts with unknown touch-down timing, legged robots show unstable controller behaviors, while animals are seemingly unaffected. Here we examine this discrepancy and suggest a hybrid robotic leg and controller design. We implemented a physical, parallel joint compliance dimensioned in combination with an active, virtual leg length controller. We present an extensive set of systematic experiments both in computer simulation and hardware. Our hybrid leg and controller design shows previously unseen robustness, in the presence of sensorimotor delays up to 60 ms, or control frequencies as low as 20 Hz, for a drop landing task from 1.3 leg lengths high and with a passive compliance ratio of 0.7. In computer simulations, we report successful drop-landings of the hybrid compliant leg from 3.8 leg lengths (1.2 m) for a 2 kg quadruped robot with 100 Hz control frequency and a sensorimotor delay of 35 ms. The results of our presented hybrid leg design and control provide" +"---\nauthor:\n- 'Amin\u00a0Shahraki^\\*^,\u00a0 Mahmoud\u00a0Abbasi,\u00a0 \u00a0Md.\u00a0Jalil Piran^\\*^,\u00a0\u00a0and\u00a0 Amir Taherkordi [^1] [^2] [^3] [^4][^5]'\nbibliography:\n- 'Main.bib'\ntitle: 'A Comprehensive Survey on 6G Networks: Applications, Core Services, Enabling Technologies, and Future Challenges'\n---\n\n[Abbasi : 6G Wireless Networks]{}\n\n\\\n\nInternet of Things (IoT), 6G, 5G, uRLLC, THz, Tactile Internet, cellular IoT.\n\nIntroduction {#Introduction}\n============\n\nHowever, by taking the present-day and emerging advancements of wireless communications into account, 5G may not meet the future demands for the following reasons:\n\n- - - \n\nTo deal with the challenges mentioned above, 6G networks are expected to provide new service classes, use new spectrum for wireless communications, offer enormous network capacity and ultra-low latency communications, and adopt novel energy-efficient transmission methods [@yang20196g].\n\nWe aim to present a comprehensive survey on 6G cellular networks by considering a wide range of 6G aspects. Our contributions can be summarized, though not exhaustively, as follows. The rest of this paper is organized as follows. Section \[Related Works\] reviews the related survey and magazine articles on 6G. In Section \[requirements\], we present the requirements and trends of 6G. The research activities and motivation are discussed in Section \[section:research activities\]. Moreover, in this section, we provide a comprehensive list" +"---\nabstract: 'During the current COVID-19 pandemic, decision makers are tasked with implementing and evaluating strategies for both treatment and disease prevention. 
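A toy numerical version of the two McMullen steps listed above (our own sketch; a one-dimensional IFS stands in for the paper's complex hyperbolic setting): weight the transition matrix by contraction ratios raised to $\alpha$ and bisect until the spectral radius equals 1. For the middle-thirds Cantor set the answer is $\log 2/\log 3$.

```python
import numpy as np

def hausdorff_dimension(T, ratios, lo=0.0, hi=2.0, tol=1e-10):
    """Bisect on alpha so that the spectral radius of the weighted matrix
    (playing the role of T^alpha above) equals 1."""
    def rho(alpha):
        W = T * np.power(ratios, alpha)
        return max(abs(np.linalg.eigvals(W)))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rho(mid) > 1 else (lo, mid)
    return 0.5 * (lo + hi)

T = np.ones((2, 2))                 # full transitions between the two branches
ratios = np.full((2, 2), 1 / 3)     # contraction ratio of each branch
print(hausdorff_dimension(T, ratios), np.log(2) / np.log(3))
```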
In order to make effective decisions, they need to simultaneously monitor various attributes of the pandemic, such as the transmission and infection rates for disease prevention, the recovery rate, which indicates treatment effectiveness, as well as the mortality rate and others. This work presents a technique for monitoring the pandemic by employing a Susceptible, Exposed, Infected, Recovered, Death (SEIRD) model regularly estimated by an augmented particle Markov chain Monte Carlo scheme, in which the posterior distribution samples are monitored via Multivariate Exponentially Weighted Moving Average (MEWMA) process monitoring. This is illustrated on the COVID-19 data for the State of Qatar.'\nauthor:\n- \ntitle: 'Monitoring SEIRD model parameters using MEWMA for the COVID-19 pandemic with application to the State of Qatar.'\n---\n\nEpidemiology; Augmented particle Markov chain Monte Carlo; Multivariate Exponentially Weighted Moving Average; process monitoring; COVID-19\n\nIntroduction {#sec:Intro}\n============\n\nCoronavirus Disease 2019 (COVID-19) [@Wu; @Rezabakhsh] is a severe pandemic affecting the whole world with a fast-spreading regime, requiring strict precautions to keep it under control. As cures and targeted treatments are limited at the moment, establishing those precautions" +"---\nabstract: |\n There has been a rapid development in data-driven task-oriented dialogue systems with the benefit of large-scale datasets. However, the progress of dialogue systems in low-resource languages lags far behind due to the lack of high-quality data. To advance the cross-lingual technology in building dialog systems, DSTC9 introduces the task of cross-lingual dialog state tracking, where we test the DST module in a low-resource language given the rich-resource training dataset.\n\n This paper studies the transferability of a cross-lingual generative dialogue state tracking system using a multilingual pre-trained seq2seq model. We experiment under different settings, including joint-training or pre-training on cross-lingual and cross-ontology datasets. We also find low cross-lingual transferability of our approaches and provide investigation and discussion.\nauthor:\n- 'Yen-Ting Lin, Yun-Nung Chen'\nbibliography:\n- 'LaTeX/mybib.bib'\n- 'LaTeX/mendeley.bib'\ntitle: 'An Empirical Study of Cross-Lingual Transferability in Generative Dialogue State Tracker'\n---\n\nIntroduction\n============\n\nDialogue state tracking is one of the essential building blocks in the" +"---\nabstract: 'In this paper, the supervisory control of a Discrete Event System (DES) analyses states and events to construct an autonomous package delivery system. The delivery system includes a legged robot to autonomously navigate uneven indoor terrain and a conveyor belt for transporting the package to the legged robot. The aim of the paper is to use the theory of supervisory control of DES to supervise and control machine states and events and to ensure that the robots collaborate autonomously. 
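For orientation on the SEIRD model named in the monitoring abstract above, a minimal discrete-time update in its standard textbook form (parameter values are made up for illustration, not estimates for Qatar):

```python
def seird_step(S, E, I, R, D, beta, sigma, gamma, mu, N):
    new_exposed = beta * S * I / N     # S -> E
    new_infected = sigma * E           # E -> I
    new_recovered = gamma * I          # I -> R
    new_dead = mu * I                  # I -> D
    return (S - new_exposed,
            E + new_exposed - new_infected,
            I + new_infected - new_recovered - new_dead,
            R + new_recovered,
            D + new_dead)

state = (9_990.0, 0.0, 10.0, 0.0, 0.0)    # N = 10,000
for _ in range(100):
    state = seird_step(*state, beta=0.35, sigma=0.2, gamma=0.1, mu=0.01, N=10_000)
print([round(x, 1) for x in state])
```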
By applying the theory, we show the collaboration of two individual robots to deliver goods in a multi-floor environment. The obtained results from the theory of supervisory control are implemented and verified in a simulation environment.'\nauthor:\n- 'Garen Haddeler$^{*}$[^1]'\ntitle: '**The Analysis of Discrete-Event System in Autonomous Package Delivery using Legged Robot and Conveyor Belt** '\n---\n\nINTRODUCTION\n============\n\nDelivering packages in indoor and uneven terrain can be challenging since today\u2019s robots cannot fully represent and navigate multi-storey terrain. Compared with wheeled robots, legged robots can be used to navigate uneven terrains since legged robots can overcome larger obstacles than their body frame [@bosworth_kim_hogan_2015]. Inspired by the capability of such robots, we developed an autonomous navigation framework for the legged robot to fulfil desired behaviour which" +"---\nabstract: 'We study the ground state (GS) many-body quantum entanglement of two different transverse field models on a quasi-2D square lattice relevant to a Hydrogen-bonded crystal, i.e., squaric acid. We measure the genuine multipartite qubit-entanglement ($C_{\text{GME}}(\psi)$) of the ground state of very generic models with all the possible cases of exchange couplings considered under defect-free and one-lattice-site-defect conditions. Our results show that creation and decay of multipartite entanglement occur for different combinations of coupling strength. When frustration is maximum, the system exhibits a peak in concurrence after a gradual increase from a disentangled state at zero field, followed by an asymptotic decay at large fields. In contrast, for a marginally frustrated (degenerate) case, though the concurrence shows a peak, the entanglement is non-zero and large at zero field. Our results reveal the sensitivity of the qubit-entanglement to the underlying GS of varying degrees of degeneracy. We conclude that, despite their similarities in ground state properties, there is a difference in the degree of entanglement between the two models. We conjecture this result could be due to the difference in the amount of degeneracy and the quantum ground states of both Hamiltonians that could dictate the results" +"---\nabstract: 'Emergency Department (ED) overcrowding is a well-recognized worldwide phenomenon. The consequences range from long waiting times for the visit and treatment of patients, up to life-threatening health conditions. The international community is devoting greater and greater efforts to analyzing this phenomenon, aiming at reducing waiting times and improving the quality of the service. Within this framework, we propose a Discrete Event Simulation (DES) model to study the patient flows through a medium-size ED located in a region of Central Italy recently hit by a severe earthquake. In particular, our aim is to simulate unusual ED conditions, corresponding to critical events (like a natural disaster) which cause a sudden spike in the number of patient arrivals. The availability of detailed data concerning the ED processes enabled us to build an accurate DES model and to perform extensive scenario analyses. 
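A bare-bones discrete event simulation loop of the kind underlying ED models like the one above (a generic single-server queue with made-up arrival and service rates, not the paper's calibrated model): events are (time, kind) pairs in a priority queue.

```python
import heapq, random

random.seed(0)
events = [(random.expovariate(1 / 5), "arrival")]   # mean interarrival: 5 min
queue_len, busy, served = 0, False, 0
while events:
    t, kind = heapq.heappop(events)
    if t > 480:                                      # simulate an 8-hour shift
        break
    if kind == "arrival":
        heapq.heappush(events, (t + random.expovariate(1 / 5), "arrival"))
        if busy:
            queue_len += 1
        else:
            busy = True                              # start service, mean 4 min
            heapq.heappush(events, (t + random.expovariate(1 / 4), "departure"))
    else:                                            # departure
        served += 1
        if queue_len:
            queue_len -= 1
            heapq.heappush(events, (t + random.expovariate(1 / 4), "departure"))
        else:
            busy = False
print("patients served:", served)
```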
The model provides a valid decision support system for the ED managers, also in defining specific emergency plans to be activated in case of mass casualty disasters.'\nauthor:\n- Giordano Fava\n- 'Tommaso Giovannelli\u00a0[![image](orcid.png)](https://orcid.org/0000-0002-1436-5348)'\n- Mauro Messedaglia\n- 'Massimo Roma\u00a0[![image](orcid.png)](https://orcid.org/0000-0002-9858-3616)'\ntitle: Effect of different patient peak arrivals on an Emergency Department via discrete event simulation\n---\n\n" +"---\nabstract: 'The notion of the Gamma integral structure for the quantum cohomology of an algebraic variety was introduced by Iritani, Katzarkov\u2013Kontsevich\u2013Pantev. In this paper, we define the Gamma integral structure for an invertible polynomial of chain type. Based on the $\Gamma$-conjecture by Iritani, we prove that the Gamma integral structure is identified with the natural integral structure for the Berglund\u2013H\u00fcbsch transposed polynomial by the mirror isomorphism.'\naddress:\n- 'Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka Osaka, 560-0043, Japan'\n- 'Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka Osaka, 560-0043, Japan'\nauthor:\n- Takumi Otani\n- Atsushi Takahashi\ntitle: Gamma integral structure for an invertible polynomial of chain type\n---\n\nIntroduction\n============\n\nFor a holomorphic function $f:\CC^n\longrightarrow\CC$ with at most an isolated critical point at the origin, K. Saito developed a theory of primitive forms which yields a Frobenius structure on the base space of the universal unfolding [@S-K; @ST]. In order to study further the (exponential) periods of a primitive form, one needs careful analysis of the structure connections of the Frobenius manifold, especially, their integral structures. A Frobenius manifold can be associated to a smooth projective variety (or orbifold) $X$ by the genus" +"---\nabstract: 'We present a new efficient transition pathway search method based on the least action principle and the Gaussian process regression method. Most pathway search methods developed so far rely on string representations, which approximate a transition pathway by a series of slowly varying system replicas. Such string methods are computationally expensive in general because they require many replicas to obtain smooth pathways. Here, we present an approach employing the Gaussian process regression method, which infers the shape of a potential energy surface with a few observed data points and Gaussian-shaped kernel functions. We demonstrate a drastic improvement in the computational efficiency of the method, of about five orders of magnitude compared with existing methods. Further, to demonstrate its real-world capabilities, we apply our method to find multiple conformational transition pathways of alanine dipeptide using a quantum mechanical potential. Owing to the improved efficiency of our method, Gaussian process action optimization (GPAO), we obtain the multiple transition pathways of alanine dipeptide and calculate their transition probabilities successfully with *ab initio* accuracy. In addition, GPAO successfully finds the isomerization pathways of small molecules and the rearrangement of atoms on a metallic surface.'\nauthor:\n- '$^1$[^1],'\n- '$^2$[^2],'\n- '$^1$[^3]'\nbibliography:\n- 'main.bib'\ndate: |" +"---\nauthor:\n- Souradeep Bhattacharya\n- Magda Arnaboldi\n- Ortwin Gerhard\n- Alan McConnachie\n- Nelson Caldwell\n- Johanna Hartke\n- 'Kenneth C. 
Freeman'\nbibliography:\n- 'ref\_pne.bib'\ndate: 'Submitted: May, 2020; Accepted: January, 2021'\nsubtitle: 'III. Constraints from deep planetary nebula luminosity functions on the origin of the inner halo substructures in M\u00a031'\ntitle: 'The survey of planetary nebulae in Andromeda (M\u00a031)'\n---\n\n[The Andromeda (M\u00a031) galaxy displays several substructures in its inner halo. Different simulations associate their origin with either a single relatively massive merger, or with a larger number of distinct, less massive accretions.]{} [ The origin of these substructures as remnants of accreted satellites or perturbations of the pre-existing disc would be encoded in the properties of their stellar populations (SPs). The metallicity and star formation history of these distinct populations leave traces on their deep \[O III\] 5007$\AA$ planetary nebulae luminosity function (PNLF). By characterizing the morphology of the PNLFs, we constrain their origin.]{} [From our 54 sq. deg. deep narrow band \[O III\] survey of M\u00a031, we identify planetary nebulae (PNe) in six major inner-halo substructures \u2013 the Giant Stream, North East Shelf, G1-Clump, Northern Clump, Western Shelf and Stream-D. We obtain their" +"---\nabstract: 'Computational modelling of political discourse tasks has become an increasingly important area of research in natural language processing. Populist rhetoric has risen across the political sphere in recent years; however, computational approaches to it have been scarce due to its complex nature. In this paper, we present the new *Us vs. Them* dataset, consisting of 6861 Reddit comments annotated for populist attitudes, and the first large-scale computational models of this phenomenon. We investigate the relationship between populist mindsets and social groups, as well as a range of emotions typically associated with these. We set a baseline for two tasks related to populist attitudes and present a set of multi-task learning models that leverage and demonstrate the importance of emotion and group identification as auxiliary tasks.'\nauthor:\n- |\n Pere-Llu\u00eds Huguet Cabot$^{1,3}$, David Abadi$^2$, **Agneta Fischer$^2$, Ekaterina Shutova$^1$**\\\n $^1$ Institute for Logic, Language and Computation, University of Amsterdam\\\n $^2$ Department of Psychology, University of Amsterdam\\\n $^3$ Babelscape Srl, Sapienza University of Rome\\\n `perelluis1993@gmail.com`\\\n `{d.r.abadi, A.H.Fischer, e.shutova}@uva.nl`\nbibliography:\n- 'anthology.bib'\n- 'eacl2021.bib'\ntitle: '*Us vs. Them*: A Dataset of Populist Attitudes, News Bias and Emotions'\n---\n\nIntroduction\n============\n\nPolitical discourse is essential in shaping public opinion. The tasks related to" +"---\nabstract: 'In the context of open quantum systems, we study a class of Kraus operators whose definition relies on the defining representation of the symmetric groups. We analyze the induced orbits as well as the limit set and the degenerate cases.'\nauthor:\n- 'Alessia Cattabriga[^1], Elisa Ercolessi[^2], Riccardo Gozzi, Erika Meucci'\ntitle: Kraus operators and symmetric groups\n---\n\nIntroduction and preliminaries\n==============================\n\nWe are interested in studying open quantum systems, that is, systems that are free to interact with the environment or with other systems. The study of open systems is useful in fields such as quantum optics, quantum measurement theory, quantum statistical mechanics and quantum cosmology. 
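As generic background for the Kraus-operator abstract above, a standard single-qubit example of a Kraus map $\rho \mapsto \sum_i K_i \rho K_i^\dagger$ (the depolarizing channel; a textbook illustration, unrelated to the symmetric-group construction studied in the paper):

```python
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])
p = 0.25
kraus = [np.sqrt(1 - 3 * p / 4) * I] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]

# Completeness: sum_i K_i^dagger K_i = I
assert np.allclose(sum(K.conj().T @ K for K in kraus), I)

rho = np.array([[1, 0], [0, 0]], dtype=complex)        # |0><0|
rho_out = sum(K @ rho @ K.conj().T for K in kraus)
print(np.round(rho_out, 3))                            # partially mixed output
```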
Moreover, the study of composite systems is at the heart of quantum computation and quantum information, where, for example, concepts like entanglement can have applications in devising algorithms and protocols, such as quantum teleportation, that do not have a classical analogue.\n\nIn elementary quantum mechanics, the state of a closed quantum system is represented by a ray [@EMM] in a separable Hilbert space ${\\cal H}$, i.e., by an equivalence class of vectors $[ \\mathbf{v} ]$, $\\mathbf{v} \\in {\\cal H}$, with respect to the relation: $\\mathbf{v} \\sim \\lambda \\mathbf{v} $ with" +"---\nabstract: 'Quantum technology is approaching a level of maturity, recently demonstrated in space-borne experiments and in-field measurements, which would allow for adoption by non-specialist users. Parallel advancements made in microprocessor-based electronics and database software can be combined to create robust, versatile and modular experimental monitoring systems. Here, we describe a monitoring network used across a number of cold atom laboratories with a shared laser system. The ability to diagnose malfunctions and unexpected or unintended behaviour, and to passively collect data for key experimental parameters, such as vacuum chamber pressure, laser beam power, or resistances of important conductors, significantly reduces debugging time. This allows for efficient control over a number of experiments and remote control when access is limited.'\naddress:\n- '$^1$ Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK'\n- '$^2$ Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 761001, Israel'\nauthor:\n- 'T J Barrett$^1$, W Evans$^1$, A Gadge$^{1,2}$, S Bhumbra$^1$, S Sleegers$^1$, R Shah$^1$, J Fekete$^1$, F\u00a0Oru\u010devi\u0107$^1$ and P\u00a0Kr\u00fcger$^1$'\nbibliography:\n- 'Bibliography.bib'\ntitle: An Environmental Monitoring Network for Quantum Gas Experiments and Devices\n---\n\nRecent developments in quantum technologies that exploit the unique properties of cold atomic clouds, such" +"---\nabstract: 'Solar flares and plasma eruptions are sudden releases of magnetic energy stored in the plasma atmosphere. To understand the physical mechanisms governing their occurrences, three-dimensional magnetic fields from the photosphere up to the corona must be studied. The solar photospheric magnetic fields are observable, whereas the coronal magnetic fields cannot be measured. One method for inferring coronal magnetic fields is to perform data-driven simulations, which incorporate time-series observational data of the photospheric magnetic fields as the bottom boundary condition of magnetohydrodynamic simulations. We developed a data-driven method in which temporal evolutions of the observational vector magnetic field can be reproduced at the bottom boundary in the simulation by introducing an inverted velocity field. This velocity field is obtained by inversely solving the induction equation and applying an appropriate gauge transformation. Using this method, we performed a data-driven simulation of successive small eruptions observed by the Solar Dynamics Observatory and the Solar Magnetic Activity Telescope in November 2017. 
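For background, the inversion just mentioned rests on the ideal-MHD induction equation (standard background, stated in our notation rather than the authors'):

$$\frac{\partial \mathbf{B}}{\partial t} = \nabla \times \left( \mathbf{v} \times \mathbf{B} \right),$$

where the bottom-boundary velocity $\mathbf{v}$ is chosen so that this equation reproduces the observed time derivative of the photospheric field $\mathbf{B}$; the residual freedom in $\mathbf{v}$ (components that leave $\mathbf{v} \times \mathbf{B}$ unchanged) is, plausibly, what the quoted gauge transformation fixes.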
The simulation successfully reproduced the converging motion between opposite-polarity magnetic patches, demonstrating successive formation and eruptions of helical flux ropes.'\nauthor:\n- Takafumi Kaneko\n- 'Sung-Hong Park'\n- Kanya Kusano\ntitle: 'Data-driven MHD simulation of successive solar plasma eruptions'\n---\n\nIntroduction {#sec:intro}\n============" +"---\nabstract: 'We investigate topological signatures in the short-time non-equilibrium dynamics of symmetry protected topological (SPT) systems starting from initial states which break a protecting symmetry. Na\u00efvely, one might expect that topology loses meaning when a protecting symmetry is broken. Defying this intuition, we illustrate, in an interacting Su-Schrieffer-Heeger (SSH) model, how this combination of symmetry breaking and quench dynamics can give rise to both single-particle and many-body signatures of topology. From the dynamics of the symmetry broken state, we find that we are able to dynamically probe the equilibrium topological phase diagram of a symmetry respecting projection of the post-quench Hamiltonian. In the ensemble dynamics, we demonstrate how spontaneous symmetry breaking (SSB) of a protecting symmetry can result in a quantized many-body topological \u2018invariant\u2019 which is not pinned under unitary time evolution. We dub this \u2018dynamical many-body topology\u2019 (DMBT). We show numerically that both the pure state and ensemble signatures are remarkably robust, and argue that these non-equilibrium signatures should be quite generic in SPT systems, regardless of protecting symmetries or spatial dimension.'\nauthor:\n- 'Jacob A. Marks'\n- Michael Sch\u00fcler\n- 'Thomas P. Devereaux'\ntitle: Dynamical signatures of symmetry protected topology following symmetry breaking\n---\n\nIntroduction\n============\n\nOut-of-equilibrium" +"---\nabstract: 'Fine-grained visual classification is a challenging task that recognizes the sub-classes belonging to the same meta-class. Large inter-class similarity and large intra-class variance are the main challenges of this task. Most existing methods try to solve this problem by designing complex model structures to explore more minute and discriminative regions. In this paper, we argue that mining multi-regional multi-grained features is precisely the key to this task. Specifically, we introduce a new loss function, termed top-down spatial attention loss (TDSA-Loss), which contains a multi-stage channel constrained module and a top-down spatial attention module. The multi-stage channel constrained module aims to make the feature channels in different stages category-aligned. Meanwhile, the top-down spatial attention module uses the attention map generated by high-level aligned feature channels to make middle-level aligned feature channels focus on particular regions. Finally, we can obtain multiple discriminative regions on high-level feature channels and obtain multiple more minute regions within these discriminative regions on middle-level feature channels. In summary, we obtain multi-regional multi-grained features. Experimental results over four widely used fine-grained image classification datasets demonstrate the effectiveness of the proposed method. Ablation studies further show the superiority of two modules in the proposed method. [Codes are
Based on sliding mode control (SMC), the proposed controller aims to address two challenges in SMC: 1) reducing the chattering phenomenon, and 2) attenuating the influence of model uncertainties and disturbances. For the first challenge, a fractional-order terminal sliding mode surface and a super-twisting algorithm are integrated into the SMC design. To attenuate uncertainties and disturbances, an add-on control structure based on the radial basis function (RBF) neural network is introduced. Stability analysis of the closed-loop control system is provided. Finally, experiments on a wafer stage testbed system are conducted, which prove that the proposed controller can robustly improve the tracking performance in the presence of uncertainties and disturbances compared to conventional and previous controllers.'\naddress:\n- 'Research Institute of Intelligent Control and Systems, Harbin Institute of Technology, Harbin 150001, P.R. China (e-mail: zhiankuang@foxmail.com)'\n- 'Mechanical Control System Lab, Mechanical Engineering Department, University of California, Berkeley, CA 94720, USA (e-mail:tomizuka@berkeley.edu)'\nauthor:\n- Zhian Kuang\n- Liting Sun\n- Huijun Gao\n- Masayoshi Tomizuka\nbibliography:\n- 'ifacconf.bib'\ntitle: 'Precise Motion Control of Wafer Stages via Adaptive Neural Network and Fractional-Order" +"---\nabstract: 'Finding good correspondences is a critical prerequisite in many feature based tasks. Given a putative correspondence set of an image pair, we propose a neural network which finds correct correspondences via a binary-class classifier and estimates relative pose through the classified correspondences. First, we observe that, due to the imbalance in the number of correct and wrong correspondences, the loss function has a great impact on the classification results. Thus, we propose a new Guided Loss that can directly use the evaluation criterion (the Fn-measure) as guidance to dynamically adjust the objective function during training. We theoretically prove the perfect negative correlation between the Guided Loss and the Fn-measure, so that the network is always trained in the direction of increasing Fn-measure, thereby maximizing it. We then propose a hybrid attention block to extract features, which integrates the Bayesian attentive context normalization (BACN) and channel-wise attention (CA). BACN can mine the prior information to better exploit global context and CA can capture complex channel context to enhance the channel awareness of the network. Finally, based on our Guided Loss and hybrid attention block, a cascade network is designed to gradually optimize the result for superior performance. Experiments have shown that" +"---\nabstract: 'The current interests in cosmology motivate us to go beyond Einstein\u2019s general theory of relativity. One of the interesting proposals comes from a new class of teleparallel gravity named symmetric teleparallel gravity, i.e., $f(Q)$ gravity, where the non-metricity term $Q$ is responsible for the fundamental interaction. The vital role of these alternative modified theories of gravity is to address these recent interests and to present a realistic cosmological model. This manuscript\u2019s main objective is to study the traversable wormhole geometries in $f(Q)$ gravity. 
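For orientation, the static, spherically symmetric line element commonly assumed in traversable-wormhole studies (standard background; the precise ansatz used in the paper may differ) is

$$ds^2 = -e^{2\Phi(r)}\,dt^2 + \left(1 - \frac{b(r)}{r}\right)^{-1} dr^2 + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right),$$

where $\Phi(r)$ is the redshift function and $b(r)$ the shape function, subject to $b(r_0)=r_0$ at the throat and the flaring-out condition $b'(r_0)<1$.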
We construct the wormhole geometries for three cases: (i) by assuming a relation between the radial and lateral pressure, (ii) by considering a phantom energy equation of state (EoS), and (iii) by taking a specific shape function in the fundamental interaction of gravity (i.e., for a linear form of $f(Q)$). Besides these, we discuss two wormhole geometries for a general case of $f(Q)$ with two specific shape functions. Then, we discuss the viability of the shape functions and the stability analysis of the wormhole solutions for each case. We have found that the null energy condition (NEC) is violated in each wormhole model, which leads us to conclude that our outcomes are realistic and stable. Finally, we discuss the embedding diagrams and volume integral quantifier to have" +"---\nauthor:\n- som\n- Peter Reimitz\nbibliography:\n- 'literature.bib'\ntitle: 'MeV astronomy with Herwig?'\n---\n\nIntroduction\n============\n\nAstrophysical and cosmological observations have shown that the bulk of the energy budget in the Universe is made out of dark matter (DM) and dark energy\u00a0[@Ade:2015xua]. Driven by the tremendous success of the Standard Model (SM) of particle physics and standard cosmology after the Big Bang, it seems inevitable to look for a fundamental description of dark matter in terms of a quantum field theory. Any SM extension via operators coupling SM degrees of freedom to dark sector fields brings along a variety of ways to search for DM\u00a0[@Lin:2019uvt].\n\nWhile indirect and direct detection as well as collider searches have set stringent constraints on weak-scale DM\u00a0[@Arcadi:2017kky], processes in the sub-GeV range are comparably unexplored by standard searches. Alongside strong efforts in direct detection searches\u00a0[@Agnes:2018ves; @Aprile:2016wwo; @Essig:2017kqs], one of the leading constraints in the MeV to GeV range is expected to be set by indirect detection\u00a0[@Leane:2018kjk]. Besides the existing GeV-scale gamma-ray and cosmic-ray observatories\u00a0[@Atwood:2009ez; @Aleksic:2014lkm; @Abramowski:2014tra; @Holder:2006gi; @Aguilar:2016vqr], several proposed MeV gamma-ray telescopes such as e-Astrogam\u00a0[@DeAngelis:2017gra] and AMEGO\u00a0[@McEnery:2019tcm] and concept telescopes\u00a0[@Moiseev:2015lva; @Duncan:2016zbd;" +"---\nabstract: 'This paper describes an analysis of the [[*NuSTAR*]{}]{}\u00a0data of the fastest-rotating magnetar [1E\u00a01547$-$5408]{}, acquired in 2016 April over a time span of 151 ks. The source was detected with a 1\u201360 keV flux of $1.7 \\times 10^{-11}$ ergs s$^{-1}$ cm$^{-2}$, together with its pulsation at a period of $2.086710(5)$ sec. In 8\u201325 keV, the pulses were phase-modulated with a period of $T=36.0 \\pm 2.3$ ks, and an amplitude of $\\sim 0.2$ sec. This reconfirms the [[*Suzaku*]{}]{}\u00a0discovery of the same effect at $T=36.0 ^{+4.5}_{-2.5} $ ks, made in the 2009 outburst. These results strengthen the view derived from the [[*Suzaku*]{}]{}\u00a0data, that this magnetar performs free precession as a result of its axial deformation by $\\sim 0.6 \\times 10^{-4}$, possibly caused by internal toroidal magnetic fields reaching $\\sim 10^{16}$ G. As in the [[*Suzaku*]{}]{}\u00a0case, the modulation was not detected in energies below $\\sim 8$ keV. Above 10 keV, the pulse-phase behaviour, including the 36 ks modulation parameters, exhibited complex energy dependences: at $\\sim 22$ keV, the modulation amplitude increased to $\\sim 0.5$ sec, and the modulation phase changed by $\\sim 65^\\circ$ over 10\u201327 keV, followed by a phase reversal. 
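As a back-of-the-envelope consistency check (using the textbook free-precession relation for a slightly deformed rotator, up to an order-unity factor from the wobble angle; this is our illustration, not a statement from the paper):

$$T \simeq \frac{P}{\epsilon} = \frac{2.087\ \mathrm{s}}{0.6\times 10^{-4}} \approx 3.5\times 10^{4}\ \mathrm{s} \approx 35\ \mathrm{ks},$$

in good agreement with the observed $36.0 \pm 2.3$ ks modulation.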
Although the pulse significance and pulsed fraction" +"---\nabstract: 'Coronavirus disease 2019 (COVID-19) has given rise to the need for computer-aided diagnosis with automatic, accurate, and fast algorithms. Recent studies have applied Machine Learning algorithms for COVID-19 diagnosis over chest X-ray (CXR) images. However, the data scarcity in these studies prevents a reliable evaluation, carries the potential of overfitting, and limits the performance of deep networks. Moreover, these networks can usually discriminate COVID-19 pneumonia only from healthy subjects or, occasionally, from a limited set of pneumonia types. Thus, there is a need for a robust and accurate COVID-19 detector evaluated over a large CXR dataset. To address this need, in this study, we propose a reliable COVID-19 detection network: ReCovNet, which can discriminate COVID-19 pneumonia from 14 different thoracic diseases and healthy subjects. To accomplish this, we have compiled the largest COVID-19 CXR dataset: QaTa-COV19 with 124,616 images including 4603 COVID-19 samples. The proposed ReCovNet achieved a detection performance with 98.57% sensitivity and 99.77% specificity.'\naddress: |\n $^{\\dagger}$ Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland\\\n $^{\\ast}$ Department of Electrical Engineering, Qatar University, Doha, Qatar\nbibliography:\n- 'refs.bib'\ntitle: 'Reliable COVID-19 Detection using Chest X-Ray Images'\n---\n\nSARS-CoV-2, COVID-19 Detection, Machine Learning, Deep Learning\n\nIntroduction\n============\n\nCoronavirus disease 2019" +"---\nabstract: 'Based on the photometric redshift catalog of @zou19, we apply a fast clustering algorithm to identify 540,432 galaxy clusters at $z\\lesssim1$ in the DESI legacy imaging surveys, which cover a sky area of about 20,000 deg$^2$. Monte-Carlo simulations indicate that the false detection rate of our detection method is about 3.1%. The total masses of galaxy clusters are derived using a calibrated richness\u2013mass relation that is based on observations of X-ray emission and the Sunyaev & Zel\u2019dovich effect. The median redshift and mass of our detected clusters are about 0.53 and $1.23\\times10^{14} M_\\odot$, respectively. Comparing with previous clusters identified using data from the Sloan Digital Sky Survey (SDSS), we can recover most of them, especially those with high richness. Our catalog will be used for further statistical studies on galaxy clusters and environmental effects on galaxy evolution, etc.'\nauthor:\n- Hu Zou\n- Jinghua Gao\n- Xin Xu\n- Xu Zhou\n- Jun Ma\n- Zhimin Zhou\n- Tianmeng Zhang\n- Jundan Nie\n- Jiali Wang\n- Suijian Xue\ntitle: 'Galaxy Clusters from the DESI Legacy Imaging Surveys. I. Cluster Detection'\n---\n\nIntroduction\n============\n\nGalaxy clusters are the most massive gravitationally bound systems in the universe." +"---\nabstract: 'The energy minimization involved in density functional calculations of electronic systems can be carried out using an exponential transformation that preserves the orthonormality of the orbitals. The energy of the system is then represented as a function of the elements of a skew-Hermitian matrix that can be optimized directly using unconstrained minimization methods. An implementation based on the limited memory Broyden-Fletcher-Goldfarb-Shanno approach with inexact line search and a preconditioner is presented and the performance compared with that of the commonly used self-consistent field approach. 
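A minimal sketch of the exponential parametrization just described (illustrative only: the toy Hamiltonian, names, and dimensions are our assumptions, not the paper's implementation):

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m = 8, 3                          # basis size, number of occupied orbitals
H = rng.standard_normal((n, n)); H = 0.5 * (H + H.T)   # toy Hamiltonian
C0, _ = np.linalg.qr(rng.standard_normal((n, n)))      # orthonormal start

def energy(a):
    """Energy as a function of the free elements of a skew-symmetric
    generator A; C0 @ expm(A) stays orthonormal by construction."""
    A = np.zeros((n, n))
    A[np.triu_indices(n, 1)] = a
    A = A - A.T                      # skew-symmetric generator
    C = (C0 @ expm(A))[:, :m]        # occupied orbitals after the rotation
    return float(np.einsum('ai,ab,bi->', C, H, C))      # tr(C^T H C)

res = minimize(energy, np.zeros(n * (n - 1) // 2), method='L-BFGS-B')
print(res.fun, np.linalg.eigvalsh(H)[:m].sum())         # should agree
```

Because `C0 @ expm(A)` is orthonormal for any skew-symmetric `A`, the minimization is genuinely unconstrained, which is what makes quasi-Newton methods such as L-BFGS directly applicable.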
Results are presented for the G2 set of 148 molecules, liquid water configurations with up to 576 molecules and some insulating crystals. A general preconditioner is presented that is applicable to systems with fractional orbital occupation as is, for example, needed in the k-point sampling for periodic systems. This exponential transformation direct minimization approach is found to outperform the standard implementation of the self-consistent field approach in that all the calculations converge with the same set of parameter values and it requires less computational effort on average. The formulation of the exponential transformation and the gradients of the energy presented here are quite general and can be applied to energy functionals that are not" +"---\nabstract: 'The bistatic backscatter architecture, with its extended range, enables flexible deployment opportunities for backscatter devices. In this paper, we study the placement of power beacons (PBs) in bistatic backscatter networks to maximize the guaranteed coverage distance (GCD), defined as the distance from the reader within which backscatter devices are able to satisfy a given quality-of-service constraint. This work departs from conventional energy source placement problems by considering the performance of the additional backscatter link on top of the energy transfer link. We adopt and optimize a symmetric PB placement scheme to maximize the GCD. The optimal PB placement under this scheme is obtained using either analytically tractable expressions or an efficient algorithm. Numerical results provide useful insights into the impacts of various system parameters on the PB placement and the resulting GCD, plus the advantages of the adopted symmetric placement scheme over other benchmark schemes.'\nauthor:\n- 'Xiaolun\u00a0Jia,\u00a0 and\u00a0Xiangyun\u00a0Zhou,\u00a0 [^1]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'biplace\\_ref.bib'\ntitle: Power Beacon Placement for Maximizing Guaranteed Coverage in Bistatic Backscatter Networks\n---\n\nBistatic backscatter communication, coverage, placement optimization, power beacons.\n\nIntroduction\n============\n\ncommunication has emerged as a promising technology to improve device lifetime. Rather than transmitting signals using active" +"---\nabstract: 'This letter presents a combined measurement of the energy spectra of atmospheric $\\nu_e$ and $\\nu_\\mu$ in the energy range between $\\sim$100 GeV and $\\sim$50 TeV with the ANTARES neutrino telescope. The analysis uses 3012 days of detector livetime in the period 2007\u20132017, and selects 1016 neutrinos interacting in (or close to) the instrumented volume of the detector, yielding *shower-like* events (mainly from $\\nu_e+\\overline \\nu_e$ charged current plus all neutrino neutral current interactions) and *starting track* events (mainly from $\\nu_\\mu + \\overline \\nu_\\mu$ charged current interactions). The contamination by atmospheric muons in the final sample is suppressed at the level of a few per mill by different steps in the selection analysis, including a Boosted Decision Tree classifier. The distribution of reconstructed events is unfolded in terms of electron and muon neutrino fluxes. 
The derived energy spectra are compared with previous measurements that, above 100 GeV, are limited to experiments in polar ice and, for $\\nu_\\mu$, to Super-Kamiokande.'\naddress:\n- \n- 'Universit\u00e9 de Haute Alsace, F-68200 Mulhouse, France'\n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n-" +"---\nabstract: |\n We present a general framework to study edge states for second order elliptic operators. We associate an integer valued index to some bulk materials, and we prove that for any junction between two such materials, localised states must appear at the boundary whenever the indices differ.\n\n *\u00a9\u00a02020 by the author. This paper may be reproduced, in its entirety, for non-commercial purposes.*\naddress: 'CEREMADE, University of Paris-Dauphine, PSL University, 75016 Paris, France'\nauthor:\n- David Gontier\nbibliography:\n- 'biblio.bib'\ntitle: Edge states for second order elliptic operators\n---\n\nIntroduction and statement of the main results\n==============================================\n\nThe bulk-edge correspondence states that one can associate an integer valued index $\\cI \\in \\Z$ to some bulk materials (represented here by Schr\u00f6dinger (PDE) or Hill\u2019s (ODE) operators). When the material is cut, edge states appear at the boundary whenever $\\cI \\neq 0$. In addition, it is believed that any junction between a left and a right materials having indices $\\cI_L$ and $\\cI_R$ must also have edge states near the junction whenever $\\cI_L \\neq \\cI_R$. We prove this fact in this paper.\n\nSince the original works of Hatsugai\u00a0[@hatsugai1993chern; @hatsugai1993edge], most studies on bulk-edge correspondence focused on tight-binding models ([*e.g.*]{}\u00a0[@graf2013bulk;" +"---\nabstract: 'The use of language is subject to variation over time as well as across social groups and knowledge domains, leading to differences even in the monolingual scenario. Such variation in word usage is often called lexical semantic change (LSC). The goal of LSC is to characterize and quantify language variations with respect to word meaning, to measure how distinct two language sources are (that is, people or language models). Because there is hardly any data available for such a task, most solutions involve unsupervised methods to align two embeddings and predict semantic change with respect to a distance measure. To that end, we propose a self-supervised approach to model lexical semantic change by generating training samples by introducing perturbations of word vectors in the input corpora. We show that our method can be used for the detection of semantic change with any alignment method. Furthermore, it can be used to choose the landmark words to use in alignment and can lead to substantial improvements over the existing techniques for alignment. We illustrate the utility of our techniques using experimental results on three different datasets, involving words with the same or different meanings. Our methods not only provide significant" +"---\nabstract: 'I review the parametrisation of the full set of $\\Lambda_b\\to\\Lambda^* (1520)$ form factors in the framework of Heavy Quark Expansion, including next-to-leading-order $\\mathcal{O}(\\alpha_s)$ and, for the first time, next-to-leading-power $\\mathcal{O}(1/m_b)$ corrections. The unknown hadronic parameters are obtained by performing a fit to recent lattice QCD calculations. 
I investigate the compatibility of the Heavy Quark Expansion and the current lattice data, finding a tension between these two approaches in the case of the tensor and pseudo-tensor form factors, whose origin could lie in an underestimation of the current lattice QCD uncertainties or in higher-order terms in the Heavy Quark Expansion.'\nauthor:\n- 'Marzia Bordone[^1]'\nbibliography:\n- 'references.bib'\ntitle: '**Heavy quark expansion of $\\Lambda_b\\to\\Lambda^*(1520)$ form factors beyond leading order**'\n---\n\nIntroduction\n============\n\nThe flavour changing neutral current (FCNC)-mediated $b\\to s\\ell^+\\ell^-$ transition plays an important role in the search for physics beyond the Standard Model (SM). Its potential has been extensively studied through the $B\\to K^{(*)}\\ell^+\\ell^-$ decays. Interestingly, the LHCb experiment found some discrepancies with respect to the SM predictions in a few observables: $R_{K}$ and $R_{K^{*}}$, which test universality between the muon and electron final states and the angular coefficient $P_5^\\prime$ in the $B\\to K^*\\mu^+\\mu^-$ angular distribution [@Aaij:2019wad; @Aaij:2017vbb; @Aaij:2014ora; @Aaij:2020nrf; @Aaij:2015oid;" +"---\nabstract: |\n The Injury Severity Score (ISS) is a standard aggregate indicator of the overall severity of multiple injuries to the human body. This score is calculated by summing the squares of the three highest values of the Abbreviated Injury Scale (AIS) grades across six body regions of a trauma victim. Despite its widespread usage over the past four decades, little is known in the (mostly medical) literature on the subject about the axiomatic and statistical properties of this quadratic aggregation score. To bridge this gap, the present paper studies the ISS from the perspective of recent advances in decision science. We demonstrate some statistical and axiomatic properties of the ISS as a multicriteria aggregation procedure. Our study highlights some unintended, undesirable properties that stem from arbitrary choices in its design and that can lead to bias in its use as a patient triage criterion.\\\n **Keywords:** Multicriteria decision making, Injury severity score, Triage.\nauthor:\n- |\n Nassim Dehouche\\\n nassim.deh@mahidol.edu\\\n Mahidol University International College\\\n Salaya, 73170, Thailand.\ntitle: On Some Statistical and Axiomatic Properties of the Injury Severity Score \n---\n\nIntroduction\n============\n\nThe Injury Severity Score (ISS) is a widely-used aggregation procedure for assessing injuries to multiple body parts and" +"---\nabstract: 'We review the equation of state of QCD matter at finite densities. We discuss the construction of the equation of state with net baryon number, electric charge, and strangeness using the results of lattice QCD simulations and hadron resonance gas models. Its application to the hydrodynamic analyses of relativistic nuclear collisions suggests that the interplay of multiple conserved charges is important in the quantitative understanding of the dense nuclear matter created at lower beam energies. 
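For reference, the standard lattice-based construction behind such equations of state (general background, not necessarily the authors' exact scheme) Taylor-expands the pressure around vanishing chemical potentials,

$$\frac{P(T,\mu_B,\mu_Q,\mu_S)}{T^4} = \sum_{i,j,k} \frac{\chi^{BQS}_{ijk}(T)}{i!\,j!\,k!} \left(\frac{\mu_B}{T}\right)^i \left(\frac{\mu_Q}{T}\right)^j \left(\frac{\mu_S}{T}\right)^k,$$

with susceptibilities $\chi^{BQS}_{ijk}(T)$ computed on the lattice and matched to a hadron resonance gas at low temperature.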
Several different models of the QCD equation of state are discussed for comparison.'\naddress:\n- |\n Department of Mathematical and Physical Sciences, Japan Women\u2019s University\\\n Bunkyo-ku, Tokyo 112-8681, Japan\\\n monnaia@fc.jwu.ac.jp\n- |\n Physics Department, Brookhaven National Laboratory\\\n Upton, New York 11973, USA\\\n bschenke@bnl.gov\n- |\n Department of Physics and Astronomy, Wayne State University\\\n Detroit, Michigan 48201, USA\\\n RIKEN BNL Research Center, Brookhaven National Laboratory\\\n Upton, New York 11973, USA\\\n chunshen@wayne.edu\nauthor:\n- AKIHIKO MONNAI\n- BJ\u00d6RN SCHENKE\n- CHUN SHEN\nbibliography:\n- 'neos.bib'\ntitle: QCD EQUATION OF STATE AT FINITE CHEMICAL POTENTIALS FOR RELATIVISTIC NUCLEAR COLLISIONS \n---\n\nIntroduction\n============\n\nThe collective properties of quantum chromodynamic (QCD) matter have been a topic of great interest in nuclear physics. A milestone has been the discovery" +"---\nabstract: 'Recently multimodal transformer models have gained popularity because their performance on language and vision tasks suggest they learn rich visual-linguistic representations. Focusing on zero-shot image retrieval tasks, we study three important factors which can impact the quality of learned representations: pretraining data, the attention mechanism, and loss functions. By pretraining models on six datasets, we observe that dataset noise and language similarity to our downstream task are important indicators of model performance. Through architectural analysis, we learn that models with a multimodal attention mechanism can outperform deeper models with modality-specific attention mechanisms. Finally, we show that successful contrastive losses used in the self-supervised learning literature do not yield similar performance gains when used in multimodal transformers. [^1]'\nauthor:\n- |\n Lisa Anne Hendricks \u00a0 John Mellor \u00a0 Rosalia Schneider\\\n [**Jean-Baptiste Alayrac**]{} \u00a0 [**Aida Nematzadeh**]{}\\\n DeepMind\\\ntitle: |\n Decoupling the Role of Data, Attention, and Losses\\\n in Multimodal Transformers\n---\n\n=1\n\nMultimodal Pretraining\n======================\n\nSignificant progress in pretraining of natural language processing (NLP) models has been made through both architectural innovations [[*e.g.*]{}, transformers; @vaswani2017attention] as well as a huge increase in the size of pretraining data and the model [[*e.g.*]{}, @devlin2018bert; @brown2020language]. This success in language pretraining has inspired parallel multimodal vision\u2013language" +"---\nabstract: 'In multi-object tracking, the tracker maintains in its memory the appearance and motion information for each object in the scene. This memory is utilized for finding matches between tracks and detections and is updated based on the matching result. Many approaches model each target in isolation and lack the ability to use all the targets in the scene to jointly update the memory. This can be problematic when there are similar looking objects in the scene. In this paper, we solve the problem of simultaneously considering all tracks during memory updating, with only a small spatial overhead, via a novel multi-track pooling module. We additionally propose a training strategy adapted to multi-track pooling which generates hard tracking episodes online. We show that the combination of these innovations results in a strong discriminative appearance model, enabling the use of greedy data association to achieve online tracking performance. 
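An illustrative toy rendering of the pooling idea (our own sketch with assumed shapes and names; it is not the authors' architecture):

```python
import numpy as np

def multi_track_pool(track_feats):
    """track_feats: (num_tracks, feat_dim) memory vectors, one per track.
    For each track, max-pool the features of all *other* tracks so that
    its memory update can condition on every target in the scene."""
    T, D = track_feats.shape
    pooled = np.empty((T, D))
    for i in range(T):
        others = np.delete(track_feats, i, axis=0)
        pooled[i] = others.max(axis=0) if len(others) else np.zeros(D)
    return np.concatenate([track_feats, pooled], axis=1)   # (T, 2D)

feats = np.random.rand(5, 16)
print(multi_track_pool(feats).shape)   # (5, 32)
```

The spatial overhead is a single pooled vector per track, which matches the "small spatial overhead" claim in spirit.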
Our experiments demonstrate real-time, state-of-the-art performance on public multi-object tracking (MOT) datasets. The code and trained models will be released at .'\nauthor:\n- |\n Chanho Kim$\\,{}^{1}$ Li Fuxin$\\,{}^{2}$ Mazen Alotaibi$\\,{}^{2}$ James M. Rehg$\\,{}^{1}$\\\n $^1$Georgia Institute of Technology $^2$Oregon State University\nbibliography:\n- 'egbib.bib'\ntitle: 'Discriminative Appearance Modeling with Multi-track Pooling for" +"---\nabstract: 'We present the analysis of [*XMM-Newton*]{} European Photon Imaging Camera (EPIC) observations of the nova shell IPHASX J210204.7$+$471015. We detect X-ray emission from the progenitor binary star with properties that resemble those of underluminous intermediate polars such as DQHer: an X-ray-emitting plasma with temperature of $T_\\mathrm{X}=(6.4\\pm3.1)\\times10^{6}$ K, a non-thermal X-ray component, and an estimated X-ray luminosity of $L_\\mathrm{X}=10^{30}$ erg\u00a0s$^{-1}$. Time series analyses unveil the presence of two periods, the dominant with a period of $2.9\\pm0.2$\u00a0hr, which might be attributed to the spin of the white dwarf, and a secondary of $4.5\\pm0.6$\u00a0hr that is in line with the orbital period of the binary system derived from optical observations. We do not detect extended X-ray emission as in other nova shells probably due to its relatively old age (130\u2013170 yr) or to its asymmetric disrupted morphology which is suggestive of explosion scenarios different to the symmetric ones assumed in available numerical simulations of nova explosions.'\nauthor:\n- |\n [J.A.Toal\u00e1$^{1}$[^1], G.Rubio$^{2,3}$, E.Santamar\u00eda$^{2,3}$, M.A.Guerrero$^4$, S.Estrada-Dorado$^{1}$, G.Ramos-Larios$^{2,3}$,]{}\\\n $^1$Instituto de Radioastronom\u00eda y Astrof\u00edsica (IRyA), UNAM Campus Morelia, Apartado postal 3-72, 58090 Morelia, Mexico\\\n $^2$CUCEI, Universidad de Guadalajara, Blvd. Marcelino Garc\u00eda Barrag\u00e1n 1421, 44430, Guadalajara, Jalisco, Mexico\\\n $^3$Instituto de Astronom\u00eda y Meteorolog\u00eda," +"---\nabstract: 'Microbiome researchers often need to model the temporal dynamics of multiple complex, nonlinear outcome trajectories simultaneously. This motivates our development of [*multivariate Sparse Functional Principal Components Analysis*]{} (mSFPCA), extending existing SFPCA methods to simultaneously characterize multiple temporal trajectories and their inter-relationships. As with existing SFPCA methods, the mSFPCA algorithm characterizes each trajectory as a smooth mean plus a weighted combination of the smooth major modes of variation about the mean, where the weights are given by the component scores for each subject. Unlike existing SFPCA methods, the mSFPCA algorithm allows estimation of multiple trajectories simultaneously, such that the component scores, which are constrained to be independent within a particular outcome for identifiability, may be arbitrarily correlated with component scores for other outcomes. A Cholesky decomposition is used to estimate the component score covariance matrix efficiently and guarantee positive semi-definiteness given these constraints. Mutual information is used to assess the strength of marginal and conditional temporal associations across outcome trajectories. Importantly, we implement mSFPCA as a Bayesian algorithm using and , enabling easy use of packages such as PSIS-LOO for model selection and graphical posterior predictive checks to assess the validity of mSFPCA models. 
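A minimal sketch of the Cholesky device mentioned above (illustrative; the constraint pattern and names are our assumptions):

```python
import numpy as np

def score_covariance(L_free, n_outcomes, n_comp):
    """Build a positive semi-definite covariance for the stacked component
    scores from an unconstrained lower-triangular factor, Sigma = L L^T.
    Within-outcome blocks can then be constrained (e.g. to be diagonal)
    while cross-outcome blocks remain arbitrary."""
    d = n_outcomes * n_comp
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = L_free
    return L @ L.T                     # PSD by construction

d = 2 * 3                              # two outcomes, three components each
Sigma = score_covariance(np.random.randn(d * (d + 1) // 2), 2, 3)
print(np.all(np.linalg.eigvalsh(Sigma) >= -1e-12))   # True
```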
Although we focus on application" +"---\nabstract: 'Algorithmic recommendations mediate interactions between millions of customers and products (in turn, their producers and sellers) on large e-commerce marketplaces like Amazon. In recent years, the producers and sellers have raised concerns about the fairness of black-box recommendation algorithms deployed on these marketplaces. Many complaints are centered around marketplaces biasing the algorithms to preferentially favor their own [*\u2018private label\u2019*]{} products over competitors. These concerns are exacerbated as marketplaces increasingly de-emphasize or replace [*\u2018organic\u2019 recommendations*]{} with ad-driven [*\u2018sponsored\u2019 recommendations*]{}, which include their own private labels. While these concerns have been covered in popular press and have spawned regulatory investigations, to our knowledge, there has not been any public audit of these marketplace algorithms. In this study, we bridge this gap by performing an end-to-end systematic audit of related item recommendations on Amazon. We propose a network-centric framework to quantify and compare the biases across organic and sponsored related item recommendations. Along a number of our proposed bias measures, we find that the sponsored recommendations are significantly more biased toward Amazon private label products compared to organic recommendations. While our findings are primarily interesting to producers and sellers on Amazon, our proposed bias measures are generally useful for measuring link" +"---\nabstract: 'We study the stability of topological crystalline superconductors in the symmetry class DIIIR and in two-dimensional space when perturbed by quartic contact interactions. It is known that no less than eight copies of helical pairs of Majorana edge modes can be gapped out by an appropriate interaction without spontaneously breaking any one of the protecting symmetries. Hence, the noninteracting classification $\\mathbb{Z}$ reduces to $\\mathbb{Z}^{\\,}_{8}$ when these interactions are present. It is also known that the stability when there are less than eight modes can be understood in terms of the presence of topological obstructions in the low-energy bosonic effective theories, which prevent opening of a gap. Here, we investigate the stability of the edge theories with four, two, and one edge modes, respectively. We give an analytical derivation of the topological term for the first case, because of which the edge theory remains gapless. For two edge modes, we employ bosonization methods to derive an effective bosonic action. When gapped, this bosonic theory is necessarily associated to the spontaneous symmetry breaking of either one of time-reversal or reflection symmetry whenever translation symmetry remains on the boundary. For one edge mode, stability is explicitly established in the Majorana representation" +"---\nabstract: 'Recently defect production was investigated during non-unitary dynamics due to non-Hermitian Hamiltonian. By ramping up the non-Hermitian coupling linearly in time through an exceptional point, defects are produced in much the same way as approaching a Hermitian critical point. A generalized Kibble\u2013Zurek scaling accounted for the ensuing scaling of the defect density in terms of the speed of the drive and the corresponding critical exponents. Here we extend this setting by adding the recycling term and considering the full Lindbladian time evolution of the problem with quantum jumps. 
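For reference, the Lindblad master equation underlying the full dissipative evolution (standard form, quoted as background) reads

$$\dot{\rho} = -\frac{i}{\hbar}\left[H,\rho\right] + \sum_k \gamma_k \left( L_k \rho L_k^\dagger - \frac{1}{2}\left\{ L_k^\dagger L_k , \rho \right\} \right),$$

where the recycling terms $L_k \rho L_k^\dagger$ are precisely what is added here on top of the purely non-Hermitian (anticommutator) part.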
We find that by linearly ramping up the environmental coupling in time, and going beyond the steady-state solution of the Liouvillian, the defect density scales linearly with the speed of the drive for all cases. This scaling is unaffected by the presence of exceptional points of the Liouvillian, which can show up in the transient states. By using a variant of the adiabatic perturbation theory, the scaling of the defect density is determined exactly from a set of *algebraic* equations.'\nauthor:\n- Bal\u00e1zs Gul\u00e1csi\n- Bal\u00e1zs D\u00f3ra\nbibliography:\n- 'refgraph.bib'\ntitle: 'Defect production due to time-dependent coupling to environment in the Lindblad equation'\n---\n\nIntroduction\n============\n\nOver recent years," +"---\nabstract: 'We present an emulator for the two-point clustering of biased tracers in real space. We construct this emulator using neural networks calibrated with more than $400$ cosmological models in an 8-dimensional cosmological parameter space that includes massive neutrinos and dynamical dark energy. The properties of biased tracers are described via a Lagrangian perturbative bias expansion which is advected to Eulerian space using the displacement field of numerical simulations. The cosmology-dependence is captured thanks to a cosmology-rescaling algorithm. We show that our emulator is capable of describing the power spectrum of galaxy formation simulations for a sample mimicking that of a typical Emission-Line survey at $z\\sim1$ with an accuracy of $1-2\\%$ up to nonlinear scales $k\\sim0.7{ h\\,{\\rm Mpc}^{-1}}$.'\nauthor:\n- |\n Matteo Zennaro,$^{1}$[^1] Raul E. Angulo,$^{1,2}$[^2] Marcos Pellejero-Ib\u00e1\u00f1ez,$^{1}$ Jens St\u00fccker,$^{1}$ Sergio Contreras,$^{1}$ and Giovanni Aric\u00f2$^{1,3}$\\\n $^{1}$Donostia International Physics Center (DIPC), Paseo Manuel de Lardizabal, 4, 20018, Donostia-San Sebasti\u00e1n, Guipuzkoa, Spain.\\\n $^{2}$IKERBASQUE, Basque Foundation for Science, 48013, Bilbao, Spain.\\\n $^{3}$Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza, Spain.\nbibliography:\n- 'Bibliography\\_all.bib'\ndate: 'Accepted XXX. Received YYY; in original form ZZZ'\ntitle: 'The BACCO simulation project: biased tracers in real space'\n---\n\n\\[firstpage\\]\n\ncosmology: theory \u2013 large-scale structure of Universe \u2013" +"---\nauthor:\n- Weiran Cai\n- 'Belgin San-Akca'\n- Jordan Snyder\n- Grayson Gordon\n- Zeev Maoz\n- 'Raissa M. D\u2019Souza'\ntitle: 'Quantifying the Global Support Network for Non-State Armed Groups (NAGs)'\n---\n\nIntroduction\n============\n\nMilitary confrontation is one of the key factors that have shaped human history. While instances of large-scale interstate warfare have decreased in the post-World War II era, internal wars and low-intensity conflicts carried out by non-state armed groups (NAGs) against nation-states have become increasingly common [@San-Akca:2016; @Maoz:2012; @Horowitz:2014; @Phillips:2018; @LaFree:2009; @Phillips:2015; @Freilicha:2015; @Pinker:2012; @Gleditsch:2013; @Gleditsch:2016; @Byman:2001; @Kalyvas:2010]. NAGs include rebel, insurgent, guerrilla, or terrorist groups that engage in violent activities targeting the government, citizens, or institutions of nation-states. Many of these NAGs are cultivated by a complex network of supporting host states (HSs) \u2014 states external to the locus of the internal conflict between the NAG and the target government. Examples include U.S. 
support to the Contras in Nicaragua in the 1980s, Israeli and Iranian support for Kurdish rebels in Iraq during the 1960s and 1970s, and NATO support for the Libyan rebels in 2011 [@San-Akca:2016]. The HSs support NAGs by providing them with military and economic aid, sanctuary to leaders or members of NAGs, logistics," +"---\nabstract: '[Gauge symmetries remove unphysical states and guarantee that field theories are free from the pathologies associated with these states. In this work we find a set of general conditions that guarantee the removal of unphysical states in field theories describing interacting vector fields. These conditions are obtained through the extension of a mechanism for the emergence of gauge symmetries proposed in a previous article \\[C. Barcel\u00f3 *et al.* JHEP 10 (2016) 084\\] in order to account for non-Abelian gauge symmetries, and are the following: low-energy Lorentz invariance, emergence of massless vector fields describable by an action quadratic in those fields and their derivatives, and self-coupling to a conserved current associated with specific rigid symmetries. Using a bootstrapping procedure, we prove that these conditions are equivalent to the emergence of gauge symmetries and, therefore, guarantee that any theory satisfying them must be equivalent to a Yang-Mills theory at low energies.]{}'\nauthor:\n- Carlos Barcel\u00f3\n- 'Ra\u00fal Carballo-Rubio'\n- 'Luis J. Garay'\n- 'Gerardo Garc\u00eda-Moreno'\nbibliography:\n- 'eym.bib'\ntitle: 'Emergent gauge symmetries: Yang-Mills theory'\n---\n\nIntroduction\n============\n\nThe search for a theory of quantum gravity, i.e., a theory which combines the principles of general relativity and quantum mechanics, has been" +"---\nabstract: 'The non-Markovianity of an arbitrary open quantum system is analyzed in reference to the multi-time statistics given by its monitoring at discrete times. On the one hand, we exploit the hierarchy of inhomogeneous transfer tensors, which provides us with relevant information about the role of correlations between the system and the environment in the dynamics. The connection between the transfer-tensor hierarchy and the CP-divisibility property is then investigated, by showing to what extent quantum Markovianity can be linked to a description of the open-system dynamics by means of the composition of 1-step transfer tensors only. On the other hand, we introduce the set of stochastic transfer tensor transformations associated with local measurements on the open system at different times and conditioned on the measurement outcomes. The use of the transfer-tensor formalism accounts for different kinds of memory effects in the multi-time statistics and allows us to compare them on a similar footing with the memory effects present in non-monitored non-Markovian dynamics, as we illustrate on a spin-boson case study.'\nauthor:\n- Stefano Gherardini\n- Andrea Smirne\n- Susana Huelga\n- Filippo Caruso\ntitle: 'Transfer-tensor description of memory effects in open-system dynamics and multi-time statistics'\n---\n\nIntroduction\n============\n\nIn" +"---\nabstract: 'The means to obtain the adsorption isotherms is a fundamental open problem in competitive chromatography. A modern technique of estimating adsorption isotherms is to solve an inverse problem so that the simulated batch separation coincides with actual experimental results. 
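Schematically (our notation, not the paper's), this estimation can be posed as a regularized output least-squares problem,

$$\min_{q}\ \left\| F(q) - y^{\delta} \right\|^2 + \alpha\, \mathcal{R}(q),$$

where $q$ parametrizes the adsorption isotherm, $F$ propagates it through the column model to a simulated outlet response, $y^{\delta}$ is the noisy measurement, and $\alpha\,\mathcal{R}$ is a regularization term of the kind discussed next.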
However, this identification process is usually ill-posed in the sense that small noise in the measured response can lead to a large fluctuation in the estimated adsorption isotherms. The conventional mathematical method of solving this problem is variational regularization, which is formulated as a non-convex minimization problem with a regularized objective functional. However, in this method, the choice of the regularization parameter and the design of a convergent solution algorithm are quite difficult in practice. Moreover, due to the restricted number of injection profiles in experiments, the types of measured data are extremely limited, which may lead to a biased estimation. In order to overcome these difficulties, in this paper, we develop a new inversion method \u2013 the Virtual Injection Promoting Feed-forward Neural Network (VIP-FNN). In this approach, the training data contain various types of artificial injections and synthetic noisy measurements at the outlet, generated by a conventional physics model\u00a0\u2013 a time-dependent convection-diffusion system. Numerical experiments" +"---\nauthor:\n- 'Giulio\u00a0Rossolini, \u00a0Alessandro\u00a0Biondi,\u00a0,\u00a0 and\u00a0Giorgio\u00a0Buttazzo,\u00a0'\nbibliography:\n- 'IEEEabrv.bib'\n- 'main.bib'\ntitle: |\n Increasing the Confidence of Deep\\\n Neural Networks by Coverage Analysis\n---\n\nIntroduction\n============\n\nRecent developments in machine learning algorithms have exhibited superhuman performance in solving specific problems, such as image classification, object detection, control, and strategy games. However, most of the AI algorithms developed today have been used for non-critical applications, such as face aging, speech recognition, text prediction, gaming, image restoration and colorization, etc. Due to their excellent performance, there is a great industrial interest in using deep neural networks (DNNs) and, more generally, machine learning algorithms in autonomous systems, such as robots and self-driving vehicles. When moving to such safety-critical application domains, several questions arise: can we trust machine-learning algorithms as they are? Are they prone to cyber-attacks? What to do if they fail? Are outputs generated within bounded response times? To answer these questions, several issues need to be addressed at different levels of the architecture, such as security, safety, explainability, and predictability.\n\nThis paper focuses on security and safety, which are quite intertwined. Several works have shown that DNN models are quite sensitive to small input variations, which can cause a" +"---\nauthor:\n- |\n Daeyung Gim [^1] and Hyungbin Park [^2]\\\n \\\n \\\n \\\nbibliography:\n- 'HJB\\_paper.bib'\nnocite: '[@al2018solving; @bjork2009arbitrage; @crisostomo2014analyisis; @guasoni2015static; @mehrdoust2020calibration; @remani2013numerical; @sirignano2018dgm]'\ntitle: A deep learning algorithm for optimal investment strategies\n---\n\nIntroduction {#sec1}\n============\n\nConsider the following expected utility maximization problem: $$\\max_{(\\pi_u)_{u\\geq t}} \\frac{1}{p} \\, \\mathbb{E}\\left[(X^{\\pi}_T)^p \\, | \\, X_t=x, \\, Y_t=y\\right],$$ where $\\pi$ is a portfolio, $X^{\\pi}$ a wealth process and $Y$ a state variable; the utility function is $(1/p)x^p=:U(x)$. This kind of problem was first suggested by [@merton1969lifetime] and is among the most fundamental and pioneering problems in economics. 
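For orientation, consider the classical Black-Scholes specialization of this problem (standard background, not this paper's contribution): one risky asset with drift $\mu$ and volatility $\sigma$, a risk-free rate $r$, and power utility $U(x)=x^p/p$. The optimal strategy then keeps a constant fraction of wealth in the risky asset,

$$\pi^{*} = \frac{\mu - r}{(1-p)\,\sigma^{2}}.$$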
The Merton problem has played a key role in an investor\u2019s wealth allocation across several assets under various market circumstances. Since then, there have been many studies of the Merton problem under various conditions. [@benth2003merton] studied the Merton problem in the Black-Scholes setting using an OU-type stochastic volatility model. [@kuhn2010optimal] studied portfolio optimization for the Merton problem in a limit-order market from the viewpoint of a shadow price. Research on optimal investment based on inside information and drift parameter uncertainty was conducted by [@danilova2010optimal]. [@nutz2010opportunity] studied utility maximization in a semimartingale market setting with the opportunity process. [@hansen2013optimal] suggested optimal investment strategies with investors\u2019" +"---\nabstract: 'We develop a constructive procedure for arriving at the Hamilton-Jacobi framework for the so-called affine in acceleration theories by analysing the canonical constraint structure. We find two scenarios depending on the order of the emerging equations of motion. By properly defining generalized brackets, the non-involutive constraints that originally arose, in both scenarios, may be removed so that the resulting involutive Hamiltonian constraints ensure integrability of the theories and, at the same time, lead to the right dynamics in the reduced phase space. In particular, when the equations of motion are second order in derivatives, we are able to detect the gauge invariant sector of the theory by using a suitable approach based on the projection of the Hamiltonians onto the tangential and normal directions of the congruence of curves in the configuration space. Regarding this, we also explore the generators of canonical and gauge transformations of these theories. Further, we briefly outline how to determine the Hamilton principal function $S$ for some particular setups. We apply our findings to some representative theories: a Chern-Simons-like theory in $(2+1)$-dim, a harmonic oscillator in $2D$, and the geodetic brane cosmology emerging in the context of extra dimensions.'\nauthor:\n- 'Alejandro Aguilar-Salas'" +"---\nabstract: 'On a complex manifold $(M,J)$, we interpret complex symplectic and pseudo-K\u00e4hler structures as symplectic forms with respect to which $J$ is, respectively, symmetric and skew-symmetric. We classify complex symplectic structures on 4-dimensional Lie algebras. We develop a method for constructing hypersymplectic structures from the above data. This allows us to obtain an example of a hypersymplectic structure on a 4-step nilmanifold.'\naddress:\n- 'Dipartimento di Scienza ed Alta Tecnologia, Universit\u00e0 degli Studi dell\u2019Insubria, Via Valleggio 11, 22100, Como, Italy'\n- 'Universidad Complutense de Madrid, Madrid, Spain'\n- 'Departamento de Matem\u00e1tica Aplicada, Universidad Polit\u00e9cnica de Madrid, C/ Jos\u00e9 Antonio Novais 10, 28040 Madrid, Spain'\nauthor:\n- Giovanni Bazzoni\n- 'Alejandro Gil-Garc\u00eda'\n- Adela Latorre\nbibliography:\n- 'bibliography.bib'\ntitle: 'Symmetric and skew-symmetric complex structures'\n---\n\nIntroduction {#sec:1}\n============\n\nIt is customary to say that K\u00e4hler geometry lies in the intersection of complex, symplectic, and Riemannian geometry. 
In fact, a K\u00e4hler structure on a manifold $M$ can be thought of as a pair $(J,\\omega)$, where $J$ is a complex structure and $\\omega$ is a symplectic form, such that, for vector fields $X,Y\\in{{\\mathfrak X}}(M)$, $g(X,Y)=\\omega(X,JY)$ defines a Riemannian metric on $M$. This requires both the tameness and the compatibility of $J$ with" +"---\nabstract: 'The Kitaev model on the honeycomb lattice is a paradigmatic system known to host a wealth of nontrivial topological phases and Majorana edge modes. In the static case, the Majorana edge modes are nondispersive. When the system is periodically driven in time, such edge modes can disperse and become chiral. We obtain the full phase diagram of the driven model as a function of the coupling and the driving period. We characterize the quantum criticality of the different topological phase transitions in both the static and driven model via the notions of Majorana-Wannier state correlation functions and momentum-dependent fidelity susceptibilities. We show that the system hosts cross-dimensional universality classes: although the static Kitaev model is defined on a 2D honeycomb lattice, its criticality falls into the universality class of 1D linear Dirac models. For the periodically driven Kitaev model, besides the universality class of prototype 2D linear Dirac models, an additional 1D nodal loop type of criticality exists owing to emergent time-reversal and mirror symmetries, indicating the possibility of engineering multiple universality classes by periodic driving. The manipulation of time-reversal symmetry allows the periodic driving to control the chirality of the Majorana edge states.'\nauthor:\n- Paolo Molignini" +"---\nabstract: 'The paper presents two variants of a Krylov-Simplex iterative method that combines Krylov and simplex iterations to minimize the residual $r = b-Ax$. The first method minimizes $\\|r\\|_\\infty$, i.e. maximum of the absolute residuals. The second minimizes $\\|r\\|_1$, and finds the solution with the least absolute residuals. Both methods search for an optimal solution $x_k$ in a Krylov subspace which results in a small linear programming problem. A specialized simplex algorithm solves this projected problem and finds the optimal linear combination of Krylov basis vectors to approximate the solution. The resulting simplex algorithm requires the solution of a series of small dense linear systems that only differ by rank-one updates. The $QR$ factorization of these matrices is updated each iteration. We demonstrate the effectiveness of the methods with numerical experiments.'\nauthor:\n- 'Wim Vanroose[^1]'\n- 'Jeffrey Cornelis[^2]'\nbibliography:\n- 'references.bib'\ntitle: 'Krylov-Simplex method that minimizes the residual in $\\ell_1$-norm or $\\ell_\\infty$-norm. [^3]'\n---\n\nKrylov Subspace, $\\ell_1$-norm, $\\ell_\\infty$-norm, primal simplex.\n\n65F10, 90C05\n\nIntroduction\n============\n\nGiven vectors $x,y \\in \\mathbb{R}^n$ the distance between them in the $\\ell_\\infty$-norm is $\\|x-y\\|_\\infty = \\max_{i=1}^{n} |x_i-y_i|$. It is the *chessboard* distance or Chebyshev distance. It is the largest difference along any of the coordinate" +"---\nabstract: 'What is the \u201cright\u201d topological invariant of a large point cloud X? Prior research has focused on estimating the full persistence diagram of X, a quantity that is very expensive to compute, unstable to outliers, and far from injective. 
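The alternative proposed next can be prototyped in a few lines (an illustrative sketch assuming the third-party `ripser` package; subset size and count are arbitrary):

```python
import numpy as np
from ripser import ripser

def distributed_persistence(X, subset_size=32, n_subsets=100, seed=0):
    """Collection of persistence diagrams of many small random subsets
    of the point cloud X (n_points, dim): a cheap, parallelizable
    alternative to one diagram of the full cloud."""
    rng = np.random.default_rng(seed)
    diagrams = []
    for _ in range(n_subsets):
        idx = rng.choice(len(X), size=subset_size, replace=False)
        diagrams.append(ripser(X[idx])['dgms'])
    return diagrams

X = np.random.rand(1000, 2)
dgms = distributed_persistence(X)
print(len(dgms), len(dgms[0]))   # 100 subsets, diagrams for H0 and H1
```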
We therefore propose that, in many cases, the collection of persistence diagrams of many small subsets of X is a better invariant. This invariant, which we call \u201cdistributed persistence,\u201d is *perfectly parallelizable*, more stable to outliers, and has a rich inverse theory. The map from the space of metric spaces (with the quasi-isometry distance) to the space of distributed persistence invariants (with the Hausdorff-Bottleneck distance) is globally bi-Lipschitz. This is a much stronger property than simply being injective, as it implies that the inverse image of a small neighborhood is a small neighborhood, and is to our knowledge the only result of its kind in the TDA literature. Moreover, the inverse Lipschitz constant depends on the size of the subsets taken, so that as the size of these subsets goes from small to large, the invariant interpolates between a purely geometric one and a topological one. Lastly, we note that our inverse results do not actually require considering all" +"---\nabstract: 'Multi-access coded caching schemes from cross resolvable designs (CRD) have been reported recently [@KNRarXiv]. To be able to compare coded caching schemes with different number of users and possibly with different number of caches a new metric called rate-per-user was introduced and it was shown that under this new metric the schemes from CRDs perform better than the Maddah-Ali-Niesen scheme in the large memory regime. In this paper a new class of CRDs is presented and it is shown that the multi-access coded caching schemes derived from these CRDs perform better than the Maddah-Ali-Niesen scheme in the entire memory regime. Comparison with other known multi-access coding schemes is also presented.'\nauthor:\n- \ntitle: 'Multi-access Coded Caching from a New Class of Cross Resolvable Designs'\n---\n\nINTRODUCTION\n============\n\nCoded caching is an active area of research that has gained popularity due to its ability to reduce data transmissions during the times of high network congestion by prefetching parts of demanded contents into the memories of end users. Designing schemes that are well suited to meet practical constraints like subpacketization while achieving reasonable rates is the main challenge in developing good coded caching schemes. Most of the attention has been" +"---\nabstract: 'Shelah showed that the existence of free subsets over internally approachable subalgebras follows from the failure of the PCF conjecture on intervals of regular cardinals. We show that a stronger property called the Approachable Bounded Subset Property can be forced from the assumption of a cardinal $\\lambda$ for which the set of Mitchell orders $\\{ o(\\mu) \\mid \\mu < \\lambda\\}$ is unbounded in $\\lambda$. Furthermore, we study the related notion of continuous tree-like scales, and show that such scales must exist on all products in canonical inner models. 
We use this result, together with a covering-type argument, to show that the large cardinal hypothesis from the forcing part is optimal.'\nauthor:\n- 'Dominik Adolf and Omer Ben-Neria'\nbibliography:\n- 'bibli.bib'\ntitle: 'Approachable Free Subsets and Fine Structure Derived Scales[^1]'\n---\n\nIntroduction\n============\n\nThe study of set theoretic algebras has been central in many areas, with many applications to compactness principles, cardinal arithmetic, and combinatorial set theory.\n\nAn algebra on a set $X$ is a tuple ${\\mathfrak{A}}= {\\langle}X,f_n{\\rangle}_{n<\\omega}$ where $f_n: X^{k_n} \\rightarrow X$ is a function. A sub-algebra is a subset $M \\subseteq X$ such that $f_n(x_0,\\ldots,x_{k_n - 1}) \\in M$ for all $(x_0,\\ldots,x_{k_n - 1}) \\in M^{k_n}$ and" +"---\nabstract: 'Several resource allocation problems involve multiple types of resources, with a different agency being responsible for \u201clocally\u201d allocating the resources of each type, while a central planner wishes to provide a guarantee on the properties of the final allocation given agents\u2019 preferences. We study the relationship between properties of the local mechanisms, each responsible for assigning all of the resources of a designated type, and the properties of a [*sequential mechanism*]{} which is composed of these local mechanisms, one for each type, applied sequentially, under [*lexicographic preferences*]{}, a well studied model of preferences over multiple types of resources in artificial intelligence and economics. We show that when preferences are $O$-legal, meaning that agents share a common importance order on the types, sequential mechanisms satisfy the desirable properties of anonymity, neutrality, non-bossiness, or Pareto-optimality if and only if every local mechanism also satisfies the same property, and they are applied sequentially according to the order $O$. Our main results are that under $O$-legal lexicographic preferences, every mechanism satisfying strategyproofness and a combination of these properties must be a sequential composition of local mechanisms that are also strategyproof, and satisfy the same combinations of properties.'\nauthor:\n- Sujoy Sikdar\n-" +"---\nabstract: 'This paper proposes an agent-based simulation of a presidential election, inspired by the French 2017 presidential election. The simulation is based on data extracted from polls, media coverage, and Twitter. The main contribution is to consider the impact of scandals and media bashing on the result of the election. In particular, it is shown that scandals can lead to higher abstention at the election, as voters have no relevant candidate left to vote for. The simulation is implemented in Unity 3D and is available to play online. **Keywords:** agent-based simulation, computational social choice, voting models'\nauthor:\n- Yassine Bouachrine and Carole Adam\ntitle: 'Modelling the Impact of Scandals: the case of the 2017 French Presidential Election'\n---\n\nIntroduction\n============\n\nDuring the 2017 French presidential election, the media had a very impactful role in the shift of the opinion away from the election\u2019s favorite Fran\u00e7ois Fillon. The seriousness of the accusations against the candidate led to Fillon plummeting in the polls. We will try to model the impact of both conventional and social media through scandal diffusion, in order to better understand the dynamics underlying the voting process.\n\nThere are a variety of existing models for the voting process." 
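To make the sequential composition in the multi-type allocation excerpt above concrete, here is a toy sketch: one local mechanism per resource type, applied in the common importance order $O$, with serial dictatorship standing in as an example local mechanism. The data layout and names are hypothetical, not the paper's formal construction.

```python
def sequential_mechanism(type_order, local_mechanisms, preferences):
    """Compose local mechanisms: allocate each resource type in turn,
    following the common importance order O of an O-legal profile."""
    allocation = {}
    for t in type_order:
        allocation[t] = local_mechanisms[t](preferences[t])
    return allocation

def serial_dictatorship(prefs):
    """Example local mechanism. prefs: agent -> ranked list of items;
    agents pick their best remaining item in a fixed priority order."""
    taken, result = set(), {}
    for agent, ranking in prefs.items():
        choice = next(item for item in ranking if item not in taken)
        taken.add(choice)
        result[agent] = choice
    return result
```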
+"---\nbibliography:\n- 'others/bibliography.bib'\n---\n\n=1\n\n[Department of Physical Chemistry]{}\\\n\n[**Design of Light-Matter Interactions**]{}\\\n\\\n\n[**I\u00f1igo Arrazola**]{}\\\n\n[Ph.D. Thesis]{}\\\n\\\n\nDepartment of Physical Chemistry University of the Basque Country (UPV/EHU) Postal Box 644, 48080 Bilbao, Spain\n\nThis document was generated with the 2020 LaTeX\u00a0distribution. The plots and figures of this thesis were generated with MATLAB and Apple\u2019s Keynote. The cover painting was done by [Ander Etxaniz](https://www.anderetxaniz.com/).\n\nThis work was funded by the Basque Government grant PRE-2015-1-0394.\n\n![image](figures/Figures_0/CreativeCommons-by-sa.png){height=\"20pt\"}2016-2020 I\u00f1igo Arrazola. This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. To view a copy of this license, visit\n\n[Department of Physical Chemistry]{}\\\n\n[**Design of Light-Matter Interactions**]{}\\\n\\\n\n[*[ Neure etxekueri\\\n(to my family) ]{}*]{}\n\n**Ruper Ordorika**\\\n*Bihotz begiekin, Hurrengo goizean (Metak CD), 2001*\n\n[ ]{}\n\nAbstract {#abstract .unnumbered}\n========\n\nQuantum mechanics is at the heart of many of the technological and scientific milestones of the last century, such as the laser, the integrated circuit, or the magnetic resonance imaging scanner. However, only a few decades have passed since we gained the ability to coherently manipulate the quantum states encoded in physical registers of specific quantum platforms. Understanding the light-matter interaction mechanisms that govern the dynamics of these systems is crucial" +"---\nabstract: 'Most existing Grammatical Error Correction (GEC) methods based on sequence-to-sequence learning mainly focus on how to generate more pseudo data to obtain better performance. Little work addresses few-shot GEC domain adaptation. In this paper, we treat different GEC domains as different GEC tasks and propose to extend meta-learning to few-shot GEC domain adaptation without using any pseudo data. We exploit a set of data-rich source domains to learn the initialization of model parameters that facilitates fast adaptation on new resource-poor target domains. We adapt the GEC model to the first language (L1) of the second language learner. To evaluate the proposed method, we use nine L1s as source domains and five L1s as target domains. Experiment results on the L1 GEC domain adaptation dataset demonstrate that the proposed approach outperforms the multi-task transfer learning baseline by 0.50 $F_{0.5}$ score on average and enables us to effectively adapt to a new L1 domain with only 200 parallel sentences.'\nauthor:\n- |\n Shengsheng Zhang^1,2^, Yaping Huang^1^, Yun Chen^3^, Liner Yang^2^,\\\n **Chencheng Wang^4^, Erhong Yang^2^**\\\n ^1^[Beijing Jiaotong University, Beijing, China]{}\\\n ^2^[Beijing Language and Culture University, Beijing, China]{}\\\n ^3^[Shanghai University of Finance and Economics, Shanghai, China]{}\\\n ^4^[Beijing University of Technology, Beijing, China]{}\\\nbibliography:\n-" +"---\nabstract: 'This work develops a class of relaxations in between the big-M and convex hull formulations of disjunctions, drawing advantages from both. The proposed \u201c$P$-split\u201d formulations split convex additively separable constraints into $P$ partitions and form the convex hull of the partitioned disjuncts. Parameter $P$ represents the trade-off of model size vs.\u00a0relaxation strength.
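For reference, the two formulations that bracket these intermediate relaxations can be written down for a generic disjunction $\bigvee_i \,[A_i x \le b_i]$; these are the standard textbook forms, not the paper's $P$-split construction itself:

```latex
\text{big-M:}\quad A_i x \le b_i + M(1 - y_i)\ \ \forall i, \qquad
\sum_i y_i = 1,\ \ y_i \in \{0,1\};
\qquad
\text{convex hull (extended):}\quad x = \sum_i x^i,\quad
A_i x^i \le y_i\, b_i\ \ \forall i, \qquad \sum_i y_i = 1,\ \ y_i \ge 0.
```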
We examine the novel formulations and prove that, under certain assumptions, the relaxations form a hierarchy starting from a big-M equivalent and converging to the convex hull. We computationally compare the proposed formulations to big-M and convex hull formulations on a test set including: K-means clustering, P\_ball problems, and ReLU neural networks. The computational results show that the intermediate $P$-split formulations can form strong outer approximations of the convex hull with fewer variables and constraints than the extended convex hull formulations, giving significant computational advantages over both the big-M and convex hull.'\nauthor:\n- 'Jan Kronqvist[^1]'\n- Ruth Misener\n- Calvin Tsay\nbibliography:\n- 'Ref.bib'\ntitle: 'Between Steps: Intermediate Relaxations between big-M and Convex Hull Formulations'\n---\n\nIntroduction\n============\n\nThere are well-known trade-offs between the big-M and convex hull relaxations of disjunctions in terms of problem size and relaxation tightness. Convex hull formulations [@balas1998disjunctive; @ben2001lectures;" +"---\nabstract: 'There are numerous examples of studied real-world systems that can be described as dynamical systems characterized by individual phases and coupled in a network-like structure. Within the framework of oscillatory models, much attention has been devoted to the Kuramoto model, which considers a collection of oscillators interacting through a sine function of the phase differences. In this paper, we draw on an extension of the Kuramoto model, called the Kuramoto-Sakaguchi model, which adds a phase lag parameter to each node. We construct a general formalism that allows us to compute the set of lag parameters that may lead to any phase configuration within a linear approximation. In particular, we devote special attention to the cases of full synchronization and symmetric configurations. We show that the set of natural frequencies, phase lag parameters and phases at the steady state is coupled by an equation and a continuous spectrum of solutions is feasible. In order to quantify the system\u2019s strain to achieve that particular configuration, we define a cost function and compute the optimal set of parameters that minimizes it. Despite considering a linear approximation of the model, we show that the obtained tuned parameters for the case of full" +"---\nabstract: 'The COVID-19 outbreak has posed an unprecedented challenge to humanity and science. On the one side, public and private incentives have been put in place to promptly allocate resources toward research areas strictly related to the COVID-19 emergency. But on the flip side, research in many fields not directly related to the pandemic has lagged behind. In this paper, we assess the impact of COVID-19 on world scientific production in the life sciences. We investigate how the usage of medical subject headings (MeSH) has changed following the outbreak. We estimate through a difference-in-differences approach the impact of COVID-19 on scientific production through PubMed. We find that COVID-related research topics have risen to prominence, displaced clinical publications, diverted funds away from research areas not directly related to COVID-19 and that the number of publications on clinical trials in unrelated fields has contracted. Our results call for urgent targeted policy interventions to reactivate biomedical research in areas that have been neglected by the COVID-19 emergency.'\naddress: |\n ${}^1$Piazza S.
Francesco, 19, Lucca, 55100, Italy\\\n ${}^2$Chair of Systems Design, ETH Zurich, Weinbergstrasse 58, 8092 Zurich, Switzerland\\\n ${}^*$ corresponding author m.riccaboni@imtlucca.it \nauthor:\n- 'Massimo Riccaboni${}^{1,*}$, Luca Verginer${}^2$'\nbibliography:\n- 'references.bib'\ntitle: 'The" +"---\nabstract: 'Graph matching, also known as network alignment, refers to finding a bijection between the vertex sets of two given graphs so as to maximally align their edges. This fundamental computational problem arises frequently in multiple fields such as computer vision and biology. Recently, there has been a plethora of work studying efficient algorithms for graph matching under probabilistic models. In this work, we propose a new algorithm for graph matching: Our algorithm associates each vertex with a signature vector using a multistage procedure and then matches a pair of vertices from the two graphs if their signature vectors are close to each other. We show that, for two Erd\u0151s\u2013R\u00e9nyi graphs with edge correlation $1-\\alpha$, our algorithm recovers the underlying matching exactly with high probability when $\\alpha \\le 1 / (\\log \\log n)^C$, where $n$ is the number of vertices in each graph and $C$ denotes a positive universal constant. This improves the condition $\\alpha \\le 1 / (\\log n)^C$ achieved in previous work.'\nbibliography:\n- 'matching.bib'\ntitle: Random Graph Matching with Improved Noise Robustness\n---\n\nGraph matching, network alignment, correlated Erd\u0151s\u2013R\u00e9nyi graphs, permutations\n\nIntroduction {#sec:intro}\n============\n\nThe problem of *graph matching* or *network alignment* consists in finding a" +"---\nabstract: 'It has been known that even though two elemental metals, $X$ and $Y$, are immiscible, they can form alloys on surfaces of other metal $Z$. In order to understand such surface alloying of immiscible metals, we study the energetic stability of binary alloys, $XZ$ and $YZ$, in several structures with various coordination numbers (CNs). By analyzing the formation energy modified to enhance the subtle energy difference between metastable structures, we find that $XZ$ and $YZ$ with B2-type structure (CN$=$8) become energetically stable when the $X$ and $Y$ metals form an alloy on the $Z$ metal surface. This is consistent with the experimental results for Pb-Sn alloys on metal surfaces such as Rh(111) and Ru(0001). Some suitable metal substrates are also predicted to form Pb-Sn alloys.'\nauthor:\n- Shota Ono\n- Junji Yuhara\n- Jun Onoe\ntitle: Simple prediction of immiscible metal alloying based on metastability analysis\n---\n\nIntroduction\n============\n\nCharacterizing the structure of alloys is an important issue in materials science. In general, alloys can be classified into two groups: ordered alloys having regular lattices and disordered alloys (or solid solutions). On the other hand, some metals are immiscible with each other in the bulk. Therefore, many attempts" +"---\nabstract: 'The origin(s) of the ubiquity of probability distribution functions (PDF) with power law tails is still a matter of fascination and investigation in many scientific fields from linguistic, social, economic, computer sciences to essentially all natural sciences. In parallel, self-excited dynamics is a prevalent characteristic of many systems, from the physics of shot noise and intermittent processes, to seismicity, financial and social systems. 
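Looking back at the graph-matching excerpt above: once signature vectors are in hand, the matching step itself is straightforward. A toy version is sketched below, using a minimum-cost assignment as the "close signatures" rule; the paper's multistage signature construction is its actual contribution and is not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_by_signatures(sig1, sig2):
    """Pair vertices of two graphs whose signature vectors are close.

    sig1, sig2: (n, d) arrays, one signature vector per vertex.
    Returns a dict mapping vertices of graph 1 to vertices of graph 2.
    """
    cost = cdist(sig1, sig2)                  # pairwise signature distances
    rows, cols = linear_sum_assignment(cost)  # minimum-cost perfect matching
    return dict(zip(rows.tolist(), cols.tolist()))
```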
Motivated by activation processes of the Arrhenius form, we bring the two threads together by introducing a general class of nonlinear self-excited point processes with fast-accelerating intensities as a function of \u201ctension\u201d. Solving the corresponding master equations, we find that a wide class of such nonlinear Hawkes processes have the PDF of their intensities described by a power law on the condition that (i)\u00a0the intensity is a fast-accelerating function of tension, (ii) the distribution of marks is two-sided with non-positive mean, and (iii) it has fast-decaying tails. In particular, Zipf\u2019s scaling is obtained in the limit where the average mark is vanishing. This unearths a novel mechanism for power laws including Zipf\u2019s law, providing a new understanding of their ubiquity.'\nauthor:\n- Kiyoshi Kanazawa\n- Didier Sornette\ntitle: 'Ubiquitous power law scaling in nonlinear" +"---\nabstract: |\n We introduce a [*diffuse interface box method* ]{}(DIBM) for the numerical approximation of elliptic problems with Dirichlet boundary conditions on complex geometries. We derive a priori $H^1$ and $L^2$ error estimates highlighting the r\u00f4le of the mesh discretization parameter and of the diffuse interface width. Finally, we present a numerical result assessing the theoretical findings.\\\n [**Keywords**]{}: box method, diffuse interface, complex geometries\nauthor:\n- 'G. Negrini$^a$, N. Parolini$^a$ and M. Verani$^a$'\nbibliography:\n- 'biblio.bib'\nnocite: '[@*]'\ntitle: A diffuse interface box method for elliptic problems\n---\n\n[ $^a$ MOX, Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano, Italy ]{}\n\nIntroduction {#sec:intro}\n============\n\nThe finite volume method (FVM) is a popular numerical strategy for solving partial differential equations modelling real-life problems. One crucial and attractive property of the FVM is that, by construction, many physical conservation laws possessed in a given application are naturally preserved. Besides, similar to the finite element method, the FVM can be used to deal with domains with complex geometries. In this respect, one crucial issue is the construction of the computational grid. To face this problem, one can basically resort to two different types of approaches. In the" +"---\nauthor:\n- 'Filippo Fedi^,^, Oleg Domanov^^, Paola Ayala^^ Thomas Pichler^^.'\ntitle: 'Synthesis of nitrogen doped single wall carbon nanotubes with caffeine.'\n---\n\nIntroduction\n============\n\nA precise control of the structure and properties is essential for capitalizing on the exceptional features of carbon nanotubes (CNTs) for real-life applications [@harris2009; @baughman2002; @de2013]. CNTs in their pristine form are materials with negligible chemical reactivity and with low surface energy, insufficient to satisfy diverse applications [@hirsch2002; @zhao2013]. A smart idea to regulate their properties is using different functionalization methods [@ayala2010b; @sun2002; @maiti2014]. In order to achieve high control of the material properties, different approaches have been proposed by different groups [@deng2016]. A fascinating type of doping of CNTs is substitutional, introducing heteroatoms into the graphitic lattice, e.g. nitrogen\u00a0[@terrones2002n; @terrones2007; @Ayala2010]. A lot of work has been done with multiwalled N-doped CNTs over the past two decades and important applications have been found\u00a0[@gong2009]. Synthesizing single-walled (SW) tubes has imposed more challenges.
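A schematic simulation of the class of processes described in the Hawkes excerpt above: the intensity is a fast-accelerating (here exponential, Arrhenius-like) function of a "tension" that relaxes between events and jumps by a two-sided, negative-mean mark at each event. Ogata thinning is used; the exponential memory kernel and all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_tension_hawkes(T, tau=1.0, mark_mean=-0.1, mark_std=1.0, seed=0):
    """Ogata thinning for intensity lambda(t) = exp(z(t)), with tension z
    decaying as exp(-t/tau) between events and jumping by a random mark."""
    rng = np.random.default_rng(seed)
    t, z, events = 0.0, 0.0, []
    while True:
        lam_max = np.exp(max(z, 0.0))       # bounds exp(z(s)) while z decays to 0
        w = rng.exponential(1.0 / lam_max)
        if t + w > T:
            break
        t += w
        z *= np.exp(-w / tau)               # tension relaxes between events
        if rng.random() < np.exp(z) / lam_max:    # thinning acceptance
            events.append(t)
            z += rng.normal(mark_mean, mark_std)  # two-sided, negative-mean mark
    return np.array(events)
```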
While some groups have achieved this by post growth treatments of pristine nanocarbons\u00a0[@esconjauregui2015; @van2013; @shrestha2017], doping has also been obtained in situ, i.e. directly growing SWCNTs with incorporated heteroatoms from a specific precursor\u00a0[@ayala2007c; @ayala2007tailoring; @elias2010]. Thanks to the additional" +"---\nauthor:\n- Ghada Sokar\n- Decebal Constantin Mocanu\n- Mykola Pechenizkiy\nbibliography:\n- 'sample.bib'\ntitle: 'Self-Attention Meta-Learner for Continual Learning'\n---\n\nIntroduction\n============\n\nLifelong learning aims to build machines that mimic human learning. The main characteristics of human learning are that (1) humans never learn in isolation, (2) they build on top of the knowledge learned in the past instead of learning from scratch, and (3) acquiring new knowledge does not lead to forgetting the past knowledge. These capabilities are crucial for autonomous agents interacting in the real world [@parisi2019continual; @lesort2020continual]. For instance, systems like chatbots, recommendation systems, and autonomous driving interact with a dynamic and open environment and operate on non-stationary data. These systems are required to quickly adapt to new situations with the help of previous knowledge, acquire new experiences, and retain previously learned experiences. Deep neural networks (DNNs) have achieved outstanding performance in different areas such as visual recognition, natural language processing, and speech recognition [@zoph2018learning; @chen2017deeplab; @kenton2019bert; @lin2017feature; @guo2016deep; @liu2017survey]. However, while DNNs are very effective in domain-specific tasks (closed environments), their performance degrades when the model interacts with non-stationary data, a phenomenon known as catastrophic forgetting [@mccloskey1989catastrophic]. Continual learning (CL) is a research area" +"---\nabstract: 'We use a dispersion representation based on unitarity and analyticity to study the low energy $\\gamma^* N\\rightarrow \\pi N$ process in the $S_{11}$ channel. Final state interactions in the $\\pi N$ system are critical to this analysis. The left-hand part of the partial wave amplitude is imported from the $\\mathcal{O}(p^2)$ chiral perturbation theory result. On the right-hand part, the final state interaction is calculated through the Omn\u00e8s formula in the $S$ wave. It is found that a good numerical fit can be achieved with only one subtraction parameter, and the electroproduction experimental data of multipole amplitudes $E_{0+},\\ S_{0+}$ in the energy region below $\\Delta(1232)$ are well described when the photon virtuality $Q^2 \\leq 0.1 \\mathrm{GeV}^2$.'\nauthor:\n- 'Xiong-Hui Cao$^1$'\n- 'Yao Ma$^1$\u00a0[^1]'\n- 'Han-Qing Zheng$^{1,2}$\u00a0[^2]'\nbibliography:\n- 'ephotoN.bib'\ntitle: '**Dispersive Analysis of Low Energy $\\gamma^* N\\rightarrow\\pi N$ Process**'\n---\n\nIntroduction\n============\n\nThe electromagnetic interactions of the nucleon have long been recognized as an important source of information for understanding strong interaction physics\u00a0[@Chew:1957tf; @Adler:1968tw; @Amaldi:1979vh; @Drechsel:1992pn; @Pascalutsa:2006up; @Aznauryan:2011qj; @Ronchen:2014cna].
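For orientation on the dispersive treatment above: the Omnès function that resums the $S$-wave final-state interaction has the standard form below, with $\delta(s')$ the elastic $\pi N$ phase shift and $s_{\mathrm{th}}$ the threshold. This is the textbook expression, not the paper's specific subtracted variant:

```latex
\Omega(s) = \exp\!\left( \frac{s}{\pi} \int_{s_{\mathrm{th}}}^{\infty}
\frac{\delta(s')}{s' \left( s' - s - i\epsilon \right)} \, \mathrm{d}s' \right).
```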
The investigation of pion photoproduction started in the 1950s with the seminal work of Chew *et al.* (CGLN) [@Chew:1957tf], where the formalism for pion photoproduction on a nucleon target was developed," +"---\nabstract: 'Rapid non-verbal communication of task-based stimuli is a challenge in human-machine teaming, particularly in closed-loop interactions such as driving. To achieve this, we must understand the representations of information for both the human and machine, and determine a basis for bridging these representations. Techniques of explainable artificial intelligence (XAI) such as layer-wise relevance propagation (LRP) provide visual heatmap explanations for high-dimensional machine learning techniques such as deep neural networks. On the side of human cognition, visual attention is driven by the bottom-up and top-down processing of sensory input related to the current task. Since both XAI and human cognition should focus on task-related stimuli, there may be overlaps between their representations of visual attention, potentially providing a means of nonverbal communication between the human and machine. In this work, we examine the correlations between LRP heatmap explanations of a neural network trained to predict driving behavior and eye gaze heatmaps of human drivers. The analysis is used to determine the feasibility of using such a technique for enhancing driving performance. We find that LRP heatmaps show increasing levels of similarity with eye gaze according to the task specificity of the neural network. We then propose how these findings" +"---\nabstract: 'In this work, a robust and efficient text-to-speech (TTS) synthesis system named Triple M is proposed for large-scale online application. The key components of Triple M are: 1) A sequence-to-sequence model adopts a novel multi-guidance attention to transfer complementary advantages from guiding attention mechanisms to the basic attention mechanism without in-domain performance loss and online service modification. Compared with single attention mechanism, multi-guidance attention not only brings better naturalness to long sentence synthesis, but also reduces the word error rate by 26.8%. 2) A new efficient multi-band multi-time vocoder framework, which reduces the computational complexity from 2.8 to 1.0 GFLOP and speeds up LPCNet by 2.75x on a single CPU.'\naddress: ' Tencent, Beijing, China'\nbibliography:\n- 'template.bib'\ntitle: 'Triple M: A Practical Text-to-speech Synthesis System With Multi-guidance Attention And Multi-band Multi-time LPCNet'\n---\n\n**Index Terms**: Speech synthesis, sequence-to-sequence model, attention, transfer learning, vocoder, LPCNet\n\nIntroduction\n============\n\nIn the past few years, speech synthesis has attracted a lot of attention due to advances in deep learning. Sequence-to-sequence neural network [@sutskever2014sequence] with attention mechanism is one of the most popular text-to-feature models [@wang2017tacotron; @shen2018natural]. Attention mechanism is applied to align the input and output sequences. Therefore, the training is" +"---\nabstract: 'The Art and Science Interaction Lab (\u201cASIL\u201d) is a unique, highly flexible and modular \u201cinteraction science\u201d research facility to effectively bring, analyse and test experiences and interactions in mixed virtual/augmented contexts as well as to conduct research on next-gen immersive technologies. 
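Referring back to the LRP-versus-eye-gaze comparison above: one simple way to score the overlap of two attention heatmaps is a Pearson correlation after normalization. The paper's exact similarity metric is not given in this excerpt, so the sketch below is only one plausible choice.

```python
import numpy as np

def heatmap_correlation(lrp, gaze, eps=1e-12):
    """Pearson correlation between two equally shaped attention heatmaps."""
    a = (lrp - lrp.mean()) / (lrp.std() + eps)
    b = (gaze - gaze.mean()) / (gaze.std() + eps)
    return float((a * b).mean())
```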
It brings together the expertise and creativity of engineers, performers, designers and scientists creating solutions and experiences shaping the lives of people. The lab is equipped with state-of-the-art visual, auditory and user-tracking equipment, fully synchronized and connected to a central backend. This synchronization allows for highly accurate multi-sensor measurements and analysis.'\nauthor:\n- Niels Van Kets^^\n- Bart Moens^^\n- Klaas Bombeke^^\n- Wouter Durnez^^\n- 'Pieter-Jan Maes'\n- Glenn Van Wallendael^^\n- Lieven De Marez\n- Marc Leman\n- Peter Lambert^^\nbibliography:\n- 'main.bib'\nsubtitle: A highly flexible and modular interaction science research facility\ntitle: Art and Science Interaction Lab\n---\n\nIntroduction\n============\n\nThe Art and Science Interaction Lab (ASIL) team supports innovation in different key domains. Within these domains, the team focuses on interaction research in virtualized environments, unraveling complex user interactions and experiences in order to design and create novel applications and interfaces. The application domains span from smart home appliances, health, safety, smart" +"---\nabstract: 'This paper estimates the effects of non-pharmaceutical interventions \u2013 mainly, the lockdown \u2013 on the COVID-19 mortality rate for the case of Italy, the first Western country to impose a national shelter-in-place order. We use a new estimator, the Augmented Synthetic Control Method (ASCM), that overcomes some limits of the standard Synthetic Control Method (SCM). The results are twofold. From a methodological point of view, the ASCM outperforms the SCM in that the latter cannot select a valid donor set, assigning all the weights to only one country (Spain) while placing zero weight on all the remaining ones. From an empirical point of view, we find strong evidence of the effectiveness of non-pharmaceutical interventions in avoiding losses of human lives in Italy: conservative estimates indicate that for each human life actually lost, in the absence of the lockdown there would have been on average another 1.15; in total, the policy saved 20,400 human lives.'\nauthor:\n- |\n Roy Cerqueti$^{a,b}$, Raffaella Coppier$^{c}$, Alessandro Girardi$^{d}$[^1], Marco Ventura$^{e}$[^2]\\\n [$^a$ Department of Social and Economic Sciences \u2013 Sapienza University of Rome, Italy]{}\\\n [$^b$ School of Business \u2013 London South Bank University, UK]{}\\\n [Email: roy.cerqueti@uniroma1.it]{}\\\n [$^c$ Department of Law and Economics \u2013 University of Macerata]{}\\" +"---\nabstract: 'We present a new math-physics modeling approach, called [*canonical quantization with numerical mode-decomposition*]{}, for capturing the physics of how incoming photons interact with finite-sized dispersive media, which is not describable by the previous Fano-diagonalization methods. The main procedure is to (1) study a system where electromagnetic (EM) fields are coupled to non-uniformly-distributed Lorentz oscillators in Hamiltonian mechanics, (2) derive a generalized Hermitian eigenvalue problem for conjugate pairs in coordinate space, (3) apply computational electromagnetics methods to find a countably-finite set of time-harmonic eigenmodes that diagonalizes the Hamiltonian, and (4) perform the subsequent canonical quantization with mode-decomposition.
Moreover, we provide several numerical simulations that capture the physics of full quantum effects, impossible by classical Maxwell\u2019s equations, such as non-local dispersion cancellation of an entangled photon pair and Hong-Ou-Mandel (HOM) effect in a dispersive beam splitter.'\nauthor:\n- 'Dong-Yeop Na'\n- Jie Zhu\n- 'Weng C. Chew'\nbibliography:\n- 'mybibpra.bib'\ntitle: 'Diagonalization of Hamiltonian for finite-sized dispersive media: Canonical quantization with numerical mode-decomposition (CQ-NMD)'\n---\n\nIntroduction\n============\n\nMain contribution\n-----------------\n\nWe present a new math-physics modeling approach, [*canonical quantization with numerical mode-decomposition*]{} (CQ-NMD), suited for studying how incoming (entangled) photons interact with finite-sized dispersive media (see Fig. \\[fig:schm\\_LO\\]), which are" +"---\nabstract: 'New spectrometric data on V Pup are combined with satellite photometry (HIPPARCOS and recent TESS) to allow a revision of the absolute parameters with increased precision. We find: $M_1$ = 14.0$\\pm$0.5, $M_2$ = 7.3$\\pm$0.3 (M$_\\odot$); $R_{1}$ = 5.48$\\pm$0.18, $R_2$ = 4.59$\\pm$0.15 (R$_\\odot$); $T_{1}$ 26000$\\pm 1000$, $T_2$ 24000 $\\pm$1000 (K), age 5 $\\pm$1 (Myr), photometric distance 320 $\\pm$10 (pc). The TESS photometry reveals low-amplitude ($\\sim$0.002 mag) variations of the $\\beta$ Cep kind, consistent with the deduced evolutionary condition and age of the optical primary. This fact provides independent support to our understanding of the system as in a process of Case A type interactive evolution that can be compared with $\\mu^1$ Sco. The $\\sim$10 M$_{\\odot}$ amount of matter shed by the over-luminous present secondary must have been mostly ejected from the system rather than transferred, thus taking angular momentum out of the orbit and keeping the pair in relative close proximity. New times of minima for V Pup have been studied and the results compared with previous analyses. The implied variation of period is consistent with the Case A evolutionary model, though we offer only a tentative sketch of the original arrangement of this massive system. We are not" +"---\nabstract: 'Advancements in distributed ledger technologies are driving the rise of blockchain-based social media platforms such as *Steemit*, where users interact with each other in similar ways as conventional social networks. These platforms are autonomously managed by users using decentralized consensus protocols in a cryptocurrency ecosystem. The deep integration of social networks and blockchains in these platforms provides potential for numerous cross-domain research studies that are of interest to both the research communities. However, it is challenging to process and analyze large volumes of raw *Steemit* data as it requires specialized skills in both software engineering and blockchain systems and involves substantial efforts in extracting and filtering various types of operations. To tackle this challenge, we collect over 38 million blocks generated in *Steemit* during a 45 month time period from 2016/03 to 2019/11 and extract ten key types of operations performed by the users. The results generate *SteemOps*, a new dataset that organizes more than 900 million operations from *Steemit* into three sub-datasets namely (i) social-network operation dataset (SOD), (ii) witness-election operation dataset (WOD) and (iii) value-transfer operation dataset (VOD). 
We describe the dataset schema and its usage in detail and outline possible future research studies using *SteemOps*." +"---\nabstract: |\n In [@schultz2017existence] Schultz generalized the work of Rajala and Sturm [@rajalasturm], proving that a weak non-branching condition holds in the more general setting of very strict CD spaces. However, similar to what happens for the strong CD condition, the very strict CD condition seems not to be stable with respect to the measured Gromov Hausdorff convergence (cf. [@MM-Example]).\\\n In this article I prove a stability result for the very strict CD condition, assuming some metric requirements on the converging sequence and on the limit space. The proof relies on the notions of *consistent geodesic flow* and *consistent plan selection*, which allow one to treat separately the static and the dynamic part of a Wasserstein geodesic. As an application, I prove that the metric measure space ${\\mathbb{R}}^N$ equipped with a crystalline norm and with the Lebesgue measure satisfies the very strict ${\\mathsf{CD}}(0,\\infty)$ condition.\nauthor:\n- Mattia Magnabosco\nbibliography:\n- 'verystrictCD.bib'\ntitle: '**A Metric Stability Result for the Very Strict CD Condition**'\n---\n\nIn their pivotal works Lott, Villani [@lottvillani] and Sturm [@sturm2006; @sturm2006ii] introduced a weak notion of curvature dimension bounds, which strongly relies on the theory of Optimal Transport. They noticed that, in a Riemannian manifold, a uniform" +"---\nabstract: 'We experimentally demonstrate a proof-of-principle implementation of an almost ideal memristor - a two-terminal circuit element whose resistance is approximately proportional to the integral of the input signal over time. The demonstrated device is based on a thin-film ferromagnet/antiferromagnet bilayer, where magnetic frustration results in viscous magnetization dynamics enabling memristive functionality, while the external magnetic field plays the role of the driving input. The demonstrated memristor concept is amenable to downscaling and can be adapted for electronic driving, making it attractive for applications in neuromorphic circuits.'\nauthor:\n- Sergei Ivanov$^1$\n- Sergei Urazhdin$^1$\nbibliography:\n- 'mybib.bib'\ntitle: Nearly ideal memristive functionality based on viscous magnetization dynamics\n---\n\n[*Introduction.*]{} Memristor - a two-terminal electronic device whose resistance is ideally proportional to the integral of the input signal, such as current or magnetic field - is one of the most promising candidates for the hardware implementation of synapses in artificial neural networks\u00a0[@handbook; @nanoscale; @ielmini]. According to the original definition\u00a0[@1083337; @1454361], an ideal memristor can be described by the equations $$\\label{eq:1}\n\\frac{dR}{dt}=aI(t), V(t)=R(t)I(t), R(t=0)=R_0,$$ where $R$ is the resistance, $I(t)$ is the input signal, $V(t)$ is the output signal, $R_0$ and $a$ are constants. The input signal $I(t)$ is" +"---\nabstract: 'The linear coefficient in a partially linear model with confounding variables can be estimated using double machine learning (DML). However, this DML estimator has a two-stage least squares (TSLS) interpretation and may produce overly wide confidence intervals. To address this issue, we propose a regularization and selection scheme, **, which leads to narrower confidence intervals. It selects either the TSLS DML estimator or a regularization-only estimator depending on whose estimated variance is smaller.
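The ideal memristor equations (1) quoted above integrate directly; a minimal forward-Euler sketch with illustrative parameter names:

```python
import numpy as np

def memristor_response(I, dt, a=1.0, R0=1.0):
    """Integrate dR/dt = a*I(t) with V(t) = R(t)*I(t), R(0) = R0 (eq. (1))."""
    R = R0
    V = np.empty(len(I))
    for n, i_n in enumerate(I):
        V[n] = R * i_n        # instantaneous output
        R += a * i_n * dt     # resistance accumulates the integral of the input
    return V
```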
The regularization-only estimator is tailored to have a low mean squared error. The \u00a0estimator is fully data driven. The \u00a0estimator converges at the parametric rate, is asymptotically Gaussian distributed, and asymptotically equivalent to the TSLS DML estimator, but \u00a0 exhibits substantially better finite sample properties. The \u00a0estimator uses the idea of k-class estimators, and we show how DML and k-class estimation can be combined to estimate the linear coefficient in a partially linear endogenous model. Empirical examples demonstrate our methodological and theoretical developments. Software code for our \u00a0method is available in the -package `dmlalg`.'\nauthor:\n- |\n Corinne Emmenegger and Peter B\u00fchlmann\\\n Seminar for Statistics, ETH Z\u00fcrich\nbibliography:\n- 'references.bib'\ntitle: Regularizing Double Machine Learning in Partially Linear Endogenous Models\n---\n\n**Keywords:** Double machine learning, endogenous" +"---\nabstract: 'What type of delegation contract should be offered when facing a risk of the magnitude of the pandemic we are currently experiencing and how does the likelihood of an exogenous early termination of the relationship modify the terms of a full-commitment contract? We study these questions by considering a dynamic principal-agent model that naturally extends the classical Holmstr\u00f6m-Milgrom setting to include a risk of default whose origin is independent of the inherent agency problem. We obtain an explicit characterization of the optimal wage along with the optimal action provided by the agent. The optimal contract is linear by offering both a fixed share of the output which is similar to the standard shutdown-free Holmstr\u00f6m-Milgrom model and a linear prevention mechanism that is proportional to the random lifetime of the contract. We then tweak the model to add a possibility for risk mitigation through investment and study its optimality.'\nauthor:\n- 'Jessica Martin[^1]'\n- 'St\u00e9phane Villeneuve[^2]'\ntitle: A Class of Explicit optimal contracts in the face of shutdown\n---\n\nPrincipal-Agent problems, default risk, Hamilton-Jacobi Bellman equations.\n\nIntroduction\n============\n\nWithout seeking to oppose public health and economic growth, there is no doubt that the management of the Covid crisis had" +"---\nabstract: 'Time-series is ubiquitous across applications, such as transportation, finance and healthcare. Time-series is often influenced by external factors, especially in the form of asynchronous events, making forecasting difficult. However, existing models are mainly designated for either synchronous time-series or asynchronous event sequence, and can hardly provide a synthetic way to capture the relation between them. We propose Variational Synergetic Multi-Horizon Network (VSMHN), a novel deep conditional generative model. To learn complex correlations across heterogeneous sequences, a tailored encoder is devised to combine the advances in deep point processes models and variational recurrent neural networks. In addition, an aligned time coding and an auxiliary transition scheme are carefully devised for batched training on unaligned sequences. Our model can be trained effectively using stochastic variational inference and generates probabilistic predictions with Monte-Carlo simulation. Furthermore, our model produces accurate, sharp and more realistic probabilistic forecasts. 
We also show that modeling asynchronous event sequences is crucial for multi-horizon time-series forecasting.'\nauthor:\n- |\n Longyuan Li^1,2^[^1], Jihai Zhang^3^, Junchi Yan^1,3^[^2], Yaohui Jin^1,2\\ $\\dagger$^,\\\n Yunhao Zhang^3^, Yanjie Duan^4^, Guangjian Tian^4^\\\nbibliography:\n- 'main.bib'\ntitle: |\n Synergetic Learning of Heterogeneous Temporal Sequences\\\n for Multi-Horizon Probabilistic Forecasting\n---\n\n=1\n\nIntroduction\n============\n\nTemporal data streams are ubiquitous" +"---\nabstract: 'There are two kinds of bisimulation, namely [*crisp*]{} and [*fuzzy*]{}, between fuzzy structures such as fuzzy automata, fuzzy labeled transition systems, fuzzy Kripke models and fuzzy interpretations in description logics. Fuzzy bisimulations between fuzzy automata over a complete residuated lattice have been introduced by [\u0106]{}iri[\u0107]{} [*et al*]{}.\u00a0in 2012. Logical characterizations of fuzzy bisimulations between fuzzy Kripke models (respectively, fuzzy interpretations in description logics) over the residuated lattice $[0,1]$ with the G\u00f6del t-norm have been provided by Fan in\u00a02015 (respectively, Nguyen [*et al*]{}.\u00a0in 2020). There was the lack of logical characterizations of fuzzy bisimulations between fuzzy graph-based structures over a general residuated lattice, as well as over the residuated lattice $[0,1]$ with the [\u0141]{}ukasiewicz or product t-norm. In this article, we provide and prove logical characterizations of fuzzy bisimulations in fuzzy modal logics over residuated lattices. The considered logics are the fuzzy propositional dynamic logic and its fragments. Our logical characterizations concern invariance of formulas under fuzzy bisimulations and the Hennessy-Milner property of fuzzy bisimulations. They can be reformulated for other fuzzy structures such as fuzzy labeled transition systems and fuzzy interpretations in description logics.'\naddress:\n- ' Institute of Informatics, University of Warsaw, Banacha 2," +"---\nabstract: 'This paper considers an information bottleneck problem with the objective of obtaining a most informative representation of a hidden feature subject to a R\u00e9nyi entropy complexity constraint. The optimal bottleneck trade-off between relevance (measured via Shannon\u2019s mutual information) and R\u00e9nyi entropy cost is defined and an iterative algorithm for finding approximate solutions is provided. We also derive an operational characterization for the optimal trade-off by demonstrating that the optimal R\u00e9nyi entropy-relevance trade-off is achievable by a simple time-sharing scalar coding scheme and that no coding scheme can provide better performance. Two examples where the optimal Shannon entropy-relevance trade-off can be exactly determined are further given.'\nauthor:\n- |\n \\\n [^1]\nbibliography:\n- '../literatureDB.bib'\ntitle: |\n An Information Bottleneck Problem\\\n with R\u00e9nyi\u2019s Entropy\n---\n\nInformation bottleneck, entropy-constrained optimization, R\u00e9nyi entropy, coding theorem, time-sharing.\n\nIntroduction\n============\n\nIn the past decade, the optimization of information measures such as entropy, cross-entropy, and mutual information has been widely and successfully adopted in machine learning algorithms [@amjad2019; @strouse2019; @goldfeld2020; @zaidi2020] and transmission systems [@zeitler2012; @winkelbauer2013; @kurkoski2014; @meidlinger2019; @nguyen2020; @stark2020]. 
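For background on the discussion that follows, the classic IB method solves a self-consistent set of equations by Blahut-Arimoto-style iterations. The sketch below implements that original (Shannon mutual-information) version; the paper's Rényi-entropy complexity constraint would change the update and is not implemented here.

```python
import numpy as np

def ib_iterations(pxy, beta, n_t, n_iter=200, seed=0):
    """Classic information bottleneck updates for p(t|x) given a joint p(x,y)."""
    rng = np.random.default_rng(seed)
    px = pxy.sum(axis=1)
    py_x = pxy / px[:, None]
    q = rng.random((len(px), n_t))
    q /= q.sum(axis=1, keepdims=True)                    # random initial p(t|x)
    for _ in range(n_iter):
        pt = px @ q                                      # marginal p(t)
        py_t = (q * px[:, None]).T @ py_x / pt[:, None]  # decoder p(y|t)
        # KL(p(y|x) || p(y|t)) for every pair (x, t)
        kl = (py_x[:, None, :] * (np.log(py_x[:, None, :] + 1e-12)
              - np.log(py_t[None, :, :] + 1e-12))).sum(axis=-1)
        q = pt[None, :] * np.exp(-beta * kl)             # encoder update
        q /= q.sum(axis=1, keepdims=True)
    return q
```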
In particular, numerous results are related to the so-called information bottleneck (IB) method [@tishby1999] whose objective is to extract from observed data the maximal relevant" +"---\nabstract: 'We apply the method of Lyapunov-Schmidt reduction to study large area-constrained Willmore surfaces in Riemannian $3$-manifolds asymptotic to Schwarzschild. In particular, we prove that the end of such a manifold is foliated by distinguished area-constrained Willmore spheres. The leaves are the unique area-constrained Willmore spheres with large area, non-negative Hawking mass, and distance to the center of the manifold at least a small multiple of the area radius. Unlike previous related work, we only require that the scalar curvature satisfies mild asymptotic conditions. We also give explicit examples to show that these conditions on the scalar curvature are necessary.'\naddress:\n- ' University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria'\n- ' University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria'\nauthor:\n- Michael Eichmair\n- Thomas Koerber\ntitle: 'Large area-constrained Willmore surfaces in asymptotically Schwarzschild $3$-manifolds'\n---\n\nIntroduction\n============\n\nLet $(M, g)$ be an asymptotically flat Riemannian $3$-manifold with non-negative scalar curvature. Such manifolds arise as maximal initial data sets for the Einstein field equations and thus play an important role in general relativity.\\\nLet $\\Sigma \\subset M$ be a sphere with unit normal $\\nu$, mean curvature vector $- H \\,\\nu$, area measure $\\mathrm{d}\\mu$, and area $|\\Sigma|$. The" +"---\nauthor:\n- Timothy\u00a0Park\n- 'Franz\u00a0J.\u00a0Kiraly'\n- 'Stephen\u00a0J.\u00a0Bourne'\nbibliography:\n- 'ref.bib'\n- 'SeasonalityPaper.bib'\ntitle: Periodic seismicity detection without declustering\n---\n\nIntroduction\n============\n\nTesting for periodicity in an earthquake catalogue is a common and important procedure in state-of-the-art seismological data analysis [@hernandez1999time] - for example, in the study of tidal/solar periodicities [@heaton1975tidal; @tanaka2002evidence; @cochran2004earth], hydrospheric periodicities [@Ader2013; @Johnson2017; @Craig2017; @JOHNSON2020], or blast detection [@rydelek1994estimating]. The earthquake periodicity testing problem is an instance of the general data scientific problem of testing for periodicity in an abstract series of events. In this manuscript, we make two main contributions:\n\n- Presenting what is, to our knowledge, the first seasonality test that can cope with earthquake clustering, which is also a formal hypothesis test with provable guarantees rather than a complete heuristic,\n\n- Validating its practical use by application on a selection of commonly known earthquake catalogues where seasonality is a question of interest.\n\nTo our knowledge, all state-of-the-art testing procedures with formal guarantees are subject to the implicit mathematical assumption of no aftershocks. Aftershocks are a phenomenon which is empirically well-validated, considered of practical importance, and scientifically well-studied for more than a century\u00a0[@utsu1995centenary]; in addition, the existence and" +"---\nabstract: 'Broad bandwidth and stable microresonator frequency combs are critical for accurate and precise optical frequency measurements in a compact and deployable format. Typically, broad bandwidths (e.g., octave spans) are achieved by tailoring the microresonator\u2019s geometric dispersion.
However, geometric dispersion engineering alone may be insufficient for sustaining bandwidths well beyond an octave. Here, through spectral translation induced by the nonlinear mixing between the soliton and a secondary pump, we greatly expand the bandwidth of the Kerr soliton microcomb far beyond the anomalous geometric dispersion region on both sides of the spectrum. We show that such nonlinear mixing can be summarized through the concept of synthetic dispersion, highlighting the frequency matching of the nonlinear process. Through detailed numerical simulations, we show that the synthetic dispersion model captures the system\u2019s key physical behavior, in which the second pump enables the non-degenerate four-wave mixing process of Bragg scattering, which spectrally translates the soliton and produces new dispersive waves on both sides of the spectrum, all while preserving low-noise properties across the full comb bandwidth. We experimentally demonstrate these concepts by pumping a silicon nitride microring resonator at 1063\u00a0nm and 1557\u00a0nm to enable the spectral translation of a single soliton microcomb" +"---\nabstract: 'This paper demonstrates a refined approach to solving dynamic optimization problems for underactuated marine surface vessels. To this end the differential flatness of a mathematical model assuming full actuation is exploited to derive an efficient representation of a finite dimensional nonlinear programming problem, which in turn is constrained to apply to the underactuated case. It is illustrated how the properties of the flat output can be employed for the generation of an initial guess to be used in the optimization algorithm in the presence of static and dynamic obstacles. As an example energy optimal point to point trajectory planning for a nonlinear 3 degrees of freedom dynamic model of an underactuated surface vessel is undertaken. Input constraints, both in rate and magnitude as well as state constraints due to convex and non-convex obstacles in the area of operation are considered and simulation results for a challenging scenario are reported. Furthermore, an extension to a trajectory tracking controller using model predictive control is made where the benefits of the flatness based direct method allow to introduce nonuniform sample times that help to realize long prediction horizons while maintaining short term accuracy and real time capability. This is also verified" +"---\nabstract: 'We analyse a wealth of optical spectroscopic and photometric observations of the bright ($V=11.9$) cataclysmic variable . The [*Gaia*]{} DR2 parallax gives a distance $d=334(8)$\u00a0pc to the source, making the object one of the intrinsically brightest nova-like variables seen under a low orbital inclination angle. Time-resolved spectroscopic observations revealed the orbital period of $P_{\\rm{orb}}=3\\fh8028(24)$. Its spectroscopic characteristics resemble RWSex and similar nova-like variables. We disentangled the H$\\alpha$ emission line into two components, and show that one component forms on the irradiated face of the secondary star. We suggest that the other one originates at a disc outflow area adjacent to the L$_3$ point.'\nauthor:\n- |\n M.\u00a0S. Hern\u00e1ndez $^{1}$[^1], G. Tovmassian$^{2}$, S. Zharikov$^{2,3}$, B.\u00a0T. G\u00e4nsicke$^{4,5}$, D. Steeghs$^{6,7}$, A. Aungwerojwit$^{8}$, P. Rodr[\u00ed]{}guez-Gil$^{9,10}$.\\\n \\\n $^{1}$Instituto de F\u00edsica y Astronom\u00eda, Facultad de Ciencias, Universidad de Valpara\u00edso, Av. 
Gran Breta\u00f1a 1111 Valpara\u00edso, Chile\\\n $^{2}$ Instituto de Astronom\u00eda, Universidad Nacional Aut\u00f3noma de M\u00e9xico, Ensenada, Baja California, C.P. 22830, Mexico\\\n $^{3}$Al-Farabi Kazakh National University, Al-Farabi Ave., 71, 050040, Almaty, Kazakhstan\\\n $^{4}$University of Warwick, Department of Physics, Gibbet Hill Road, Coventry, CV4 7AL, United Kingdom.\\\n $^{5}$Centre for Exoplanets and Habitability, University of Warwick, Coventry CV4 7AL, UK.\\\n $^{6}$Department of Physics, Astronomy and" +"---\nabstract: 'In $2018$, West Nile Virus (WNV) was detected for the first time in Germany. Since the first detection, 36 human cases and 175 cases in horses and birds have been detected. The transmission cycle of West Nile Virus includes birds and mosquitoes and \u2013 as dead-end hosts \u2013 people and horses. Spatial dissemination of the disease is caused by the movements of birds and mosquitoes. While the activity and movement of mosquitoes depend mainly on temperature, the birds show a complex movement pattern arising from local birds and long range dispersal birds. To this end, we have developed a metapopulation network model framework to delineate the potential spatial distribution and spread of WNV across Germany as well as to evaluate the risk throughout our proposed network model. Our model facilitates the interconnection amongst the vector, local bird and long range dispersal bird contact networks. We have assumed different distance dispersal kernel models for the vector and avian populations with the intention to include short and long range dispersal. The model includes spatial variation of mosquito abundance and the movements to resemble reality.'\naddress:\n- |\n Friedrich-Loeffler-Institut\\\n Institute of Epidemiology\\\n S\u00fcdufer 10, 17493 Greifswald, Germany\n-" +"---\nabstract: |\n Recently, there has been a growing need for analyzing data on manifolds owing to their important role in diverse fields of science and engineering. In the literature of manifold-valued data analysis up till now, however, only a few works have been carried out concerning the robustness of estimation against noises, outliers, and other sources of perturbations. In this regard, we introduce a novel extrinsic framework for analyzing manifold valued data in a robust manner. First, by extending the notion of the geometric median, we propose a new robust location parameter on manifolds, the so-called extrinsic median. A robust extrinsic regression method is also developed by incorporating the conditional extrinsic median into the classical local polynomial regression method. We present Weiszfeld\u2019s algorithm for implementing the proposed methods. The promising performance of our approach against existing methods is illustrated through simulation studies.\n\n [**Key words:**]{} Robust statistics, Extrinsic median, Nonparametric regression, Riemannian manifolds\nauthor:\n- |\n Hwiyoung Lee\\\n `hwiyoung.lee@stat.fsu.edu`\nbibliography:\n- 'Robust\\_Extrinsic.bib'\ndate: |\n Department of Statistics, Florida State University\\\n January 28, 2021\ntitle: Robust Extrinsic Regression Analysis for Manifold Valued Data\n---\n\nIntroduction\n============\n\nOver the past few decades, analyzing data taking values in non-Euclidean spaces, mostly nonlinear" +"---\nabstract: 'Early wildfire detection is of paramount importance to avoid as much damage as possible to the environment, properties, and lives.
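Referring back to Weiszfeld's algorithm named in the manifold-regression excerpt above: in its plain Euclidean form (the extrinsic variant embeds the manifold first) the iteration is only a few lines.

```python
import numpy as np

def weiszfeld(X, n_iter=100, eps=1e-9):
    """Weiszfeld iteration for the geometric median of points X (n, d).
    The coincidence case (iterate landing on a data point) is ignored."""
    m = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(X - m, axis=1)
        w = 1.0 / np.maximum(d, eps)         # inverse-distance weights
        m = (w[:, None] * X).sum(axis=0) / w.sum()
    return m
```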
Deep Learning (DL) models that can leverage both visible and infrared information have the potential to display state-of-the-art performance, with lower false-positive rates than existing techniques. However, most DL-based image fusion methods have not been evaluated in the domain of fire imagery. Additionally, to the best of our knowledge, no publicly available dataset contains visible-infrared fused fire images. There is a growing interest in DL-based image fusion techniques due to their reduced complexity. Due to the latter, we select three state-of-the-art, DL-based image fusion techniques and evaluate them for the specific task of fire image fusion. We compare the performance of these methods on selected metrics. Finally, we also present an extension to one of the said methods, that we called *FIRe-GAN*, that improves the generation of artificial infrared images and fused ones on selected metrics.'\nauthor:\n- 'J. F. Cipri\u00e1n-S\u00e1nchez'\n- 'G. Ochoa-Ruiz'\n- 'M. Gonzalez-Mendoza'\n- 'L. Rossi'\nbibliography:\n- 'references.bib'\ndate: 'Received: date / Accepted: date'\ntitle: 'FIRe-GAN: A novel Deep Learning-based infrared-visible fusion method for wildfire imagery [^1] '\n---\n\n[example.eps]{} gsav newpath 20" +"---\nabstract: 'Counterfactual explanations, which deal with \u201cwhy not?\u201d scenarios, can provide insightful explanations to an AI agent\u2019s behavior [@TimMillerSocialSciencePaper2019]. In this work, we focus on generating counterfactual explanations for deep reinforcement learning (RL) agents which operate in visual input environments like Atari. We introduce [*counterfactual*]{} [*state*]{} [*explanations*]{}, a novel example-based approach to counterfactual explanations based on generative deep learning. Specifically, a counterfactual state illustrates what minimal change is needed to an Atari game image such that the agent chooses a different action. We also evaluate the effectiveness of counterfactual states on human participants who are not machine learning experts. Our first user study investigates if humans can discern if the counterfactual state explanations are produced by the actual game or produced by a generative deep learning approach. Our second user study investigates if counterfactual state explanations can help non-expert participants identify a flawed agent; we compare against a baseline approach based on a nearest neighbor explanation which uses images from the actual game. Our results indicate that counterfactual state explanations have sufficient fidelity to the actual game images to enable non-experts to more effectively identify a flawed RL agent compared to the nearest neighbor baseline and to having no" +"---\nabstract: |\n The estimation of functions with varying degrees of smoothness is a challenging problem in the nonparametric function estimation. In this paper, we propose the LABS (L\u00e9vy Adaptive B-Spline regression) model, an extension of the LARK models, for the estimation of functions with varying degrees of smoothness. LABS model is a LARK with B-spline bases as generating kernels. The B-spline basis consists of piecewise $k$ degree polynomials with $k-1$ continuous derivatives and can express systematically functions with varying degrees of smoothness. By changing the orders of the B-spline basis, LABS can systematically adapt the smoothness of functions, i.e., jump discontinuities, sharp peaks, etc. 
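As a quick illustration of the basis described above (a degree-$k$ B-spline element is a piecewise degree-$k$ polynomial with $k-1$ continuous derivatives), assuming SciPy is available:

```python
import numpy as np
from scipy.interpolate import BSpline

k = 2                                      # spline degree
knots = np.arange(k + 2, dtype=float)      # minimal knot vector for one element
b = BSpline.basis_element(knots)           # degree = len(knots) - 2 = k
x = np.linspace(knots[0], knots[-1], 7)
print(np.round(b(x), 4))                   # samples of the bump-shaped basis
```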
Results of simulation studies and real data examples show that this model captures not only smooth areas but also jumps and sharp peaks of functions. The proposed model also has the best performance in almost all examples. Finally, we provide theoretical results that the mean function for the LABS model belongs to certain Besov spaces based on the orders of the B-spline basis and that the prior of the model has full support on the Besov spaces.\n\n Key words: Nonparametric Function Estimation; L\u00e9vy Random Measure; Besov Space; Reversible Jump Markov Chain Monte Carlo." +"---\nauthor:\n- 'L. M. Flor-Torres, R. Coziol, K.-P. Schr\u00f6der, D. Jack, and J. H. M. M. Schmitt,'\ntitle: 'Connecting the formation of stars and planets. II: coupling the angular momentum of stars with the angular momentum of planets'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe discovery of gas giant planets orbiting very close to their stars (hot Jupiters, or HJs) has forced us to reconsider our model for the formation of planets around low mass stars by including in an ad hoc way large scale migration. Since this did not happen in the solar system, it raises the natural question of understanding under what conditions large scale migration could be triggered in a proto-planetary disk (PPD). By stating such a question, we adopt the simplest view that there is only one universal process for the formation of planets, following the collapse of dense regions in a molecular cloud [@McKee2007; @Draine2011; @Champion2019]. This reduces the problem to a more specific one, which is: how do we include migration in a well-developed model like the core collapse scenario (the standard model), which explains in detail how the solar system formed [@Safronov1969; @Wetherill1989; @Wurm1996; @Poppe1997; @Klahr2006; @Hilke2011; @dePater2015; @Raymond2020]?\n\nIn the literature, two migration" +"---\nabstract: 'We answer a basic question in Nevanlinna theory by showing that Ahlfors currents associated to the same entire curve may be [*nonunique*]{}. Indeed, we will construct one exotic entire curve $f: \\mathbb{C}\\rightarrow X$ which produces infinitely many cohomologically different Ahlfors currents. Moreover, concerning Siu\u2019s decomposition, for an arbitrary $k\\in \\mathbb{Z}_{+}\\cup \\{\\infty\\}$, some of the obtained Ahlfors currents have singular parts supported on $k$ irreducible curves. In addition, they can have [*nonzero*]{} diffuse parts as well. Lastly, we provide new examples of diffuse Ahlfors currents on the product of two elliptic curves and on $\\mathbb{P}^2({\\mathbb{C}})$, and we show cohomologically elaborate Ahlfors currents on blow-ups of $X$.'\naddress:\n- 'Hua Loo-Keng center for Mathematical Sciences, Academy of Mathematics and System Science, Chinese Academy of Sciences, Beijing 100190, China & Department of Mathematics, University of Education, Hue University, 34 Le Loi St., Hue City, Vietnam'\n- 'Academy of Mathematics and System Science & Hua Loo-Keng Key Laboratory of Mathematics, Chinese Academy of Sciences, Beijing 100190, China'\nauthor:\n- Dinh Tuan Huynh\n- 'Song-Yan Xie'\nbibliography:\n- 'article.bib'\ntitle: On Ahlfors currents\n---\n\n**Introduction**\n================\n\nLet $X$ be a compact complex manifold equipped with an area form $\\omega$. Let $f:\\mathbb{C}\\longrightarrow X$ be a nonconstant" +"---\nabstract: 'Knowledge tracing allows Intelligent Tutoring Systems to infer which topics or skills a student has mastered, thus adjusting curriculum accordingly.
Deep Learning-based models like Deep Knowledge Tracing (DKT) and Dynamic Key-Value Memory Network (DKVMN) have achieved significant improvements compared with models like Bayesian Knowledge Tracing (BKT) and Performance Factors Analysis (PFA). However, these deep learning-based models are not as interpretable as other models because the decision-making process learned by deep neural networks is not wholly understood by the research community. In previous work, we critically examined the DKT model, visualizing and analyzing the behaviors of DKT in high-dimensional space. In this work, we extend our original analyses with a much larger dataset and add discussions about the memory states of the DKVMN model. We discover that Deep Knowledge Tracing has some critical pitfalls: 1) instead of tracking each skill through time, DKT is more likely to learn an \u2018ability\u2019 model; 2) the recurrent nature of DKT reinforces irrelevant information that it uses during the tracking task; 3) an untrained recurrent network can achieve similar results to a trained DKT model, supporting a conclusion that recurrence relations are not properly learned and, instead, improvements are simply" +"---\nabstract: 'We report a systematic measurement of cumulants, $C_{n}$, for net-proton, proton and antiproton multiplicity distributions, and correlation functions, $\\kappa_n$, for proton and antiproton multiplicity distributions up to the fourth order in Au+Au collisions at $\\sqrt{s_{\\mathrm {NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The $C_{n}$ and $\\kappa_n$ are presented as a function of collision energy, centrality and kinematic acceptance in rapidity, $y$, and transverse momentum, $p_{T}$. The data were taken during the first phase of the Beam Energy Scan (BES) program (2010 \u2013 2017) at the BNL Relativistic Heavy Ion Collider (RHIC) facility. The measurements are carried out at midrapidity ($|y| <$ 0.5) and transverse momentum 0.4 $<$ $p_{\\rm T}$ $<$ 2.0 GeV/$c$, using the STAR detector at RHIC. We observe a non-monotonic energy dependence ($\\sqrt{s_{{\\rm NN}}}$ = 7.7 \u2013 62.4 GeV) of the net-proton $C_{4}$/$C_{2}$ with a significance of 3.1$\\sigma$ for the 0-5% central Au+Au collisions. This is consistent with the expectations of critical fluctuations in a QCD-inspired model. Thermal and transport model calculations show a monotonic variation with $\\sqrt{s_{{\\rm NN}}}$. For the multiparticle correlation functions, we observe significant negative values for a two-particle correlation function, $\\kappa_2$, of protons and antiprotons, which
Schulman[^2]'\nbibliography:\n- 'refs.bib'\ntitle: Hadamard Extensions and the Identification of Mixtures of Product Distributions\n---\n\nIntroduction {#sec: intro}\n============\n\nThe Hadamard product for row vectors $u=(u_1,\\ldots,u_k)$, $v=(v_1,\\ldots,v_k)$ is the mapping $\\odot: {\\mathbb R}^k \\times {\\mathbb R}^k \\to {\\mathbb R}^k$ given by $$\\begin{aligned}\n u \\odot v & := (u_1v_1,\\ldots,u_kv_k) \\end{aligned}$$ The identity for this product is the all-ones vector ${\\mathbbm{1}}$. We associate with vector $v$ the linear operator $v_\\odot = \\operatorname{diag}(v)$, a $k \\times k$ diagonal matrix, so that $$u \\cdot v_\\odot =v \\odot u .$$\n\nThroughout this paper ${{\\mathbf{m}}}$ is a real matrix with row set $[n]:=\\{1,\\ldots,n\\}$ and column set $[k]$; write ${{\\mathbf{m}}}_{i}$ for a row and ${{\\mathbf{m}}}^{j}$ for a column.\n\nAs a matter of notation, for a matrix $Q$ and nonempty sets $R$ of rows and" +"---\nabstract: 'The present article is devoted to developing the Legendre wavelet operational matrix method (LWOMM) to find the numerical solution of two-dimensional hyperbolic telegraph equations (HTE) with appropriate initial (time) and boundary (space) conditions. The Legendre wavelet series with unknown coefficients has been used for approximating the solution in both the spatial and temporal variables. The basic idea for discretizing two-dimensional HTE is based on differentiation and integration of operational matrices. By implementing LWOMM on HTE, HTE is transformed into an algebraic generalized Sylvester equation. Numerical experiments are provided to illustrate the accuracy and efficiency of the presented numerical scheme. Comparisons of the numerical results of the proposed method with some existing numerical methods confirm that the method is experimentally easy, accurate and fast. Moreover, we have investigated the convergence analysis of multidimensional Legendre wavelet approximation. Finally, we have compared our results with those of the research article of Mittal and Bhatia (see [@mittal_2014]).'\naddress: 'Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, India'\nauthor:\n- Vijay Kumar Patel\n- Dhirendra Bahuguna\ntitle: 'Numerical and approximate solutions for two-dimensional hyperbolic telegraph equation via wavelet matrices'\n---\n\nTelegraph equation, Legendre wavelets, Operational matrices, Kronecker multiplications, BICGSTAB method.\n\nApplications {#applications .unnumbered}" +"---\nabstract: 'The impact of global warming and the imperative to limit climate change have stimulated the need to develop new solutions based on renewable energy sources. One of the emerging trends in this endeavor is Electric Vehicles (EVs), which use electricity instead of traditional fossil fuels as a power source, relying on the Vehicle-to-Grid (V2G) paradigm. The novelty of such a paradigm requires careful analysis to avoid malicious attempts. An attacker can exploit several surfaces, such as the remote connection between the Distribution Grid and Charging Supply or the authentication system between the charging Supply Equipment and the Electric Vehicles. However, the high cost and implementation complexity of a V2G architecture can restrain research in this field. In this paper, we approach this limitation by proposing MiniV2G, an open-source emulator to simulate Electric Vehicle Charging (EVC) built on top of Mininet and RiseV2G.
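Returning to the Hadamard-extension record above: the object defined there ("all Hadamard products of subsets of the rows") is concrete enough to compute directly. Below is a minimal numpy sketch, not code from the paper; the sample matrix and the rank check are assumptions for illustration, and note the $2^n$ growth in the number of rows:

```python
# Illustrative sketch of the Hadamard extension: every subset of rows of M
# is multiplied entrywise; the empty subset contributes the all-ones row,
# the identity for the Hadamard product. The extension has 2^n rows.
import itertools
import numpy as np

def hadamard_extension(M: np.ndarray) -> np.ndarray:
    n, k = M.shape
    rows = []
    for r in range(n + 1):
        for subset in itertools.combinations(range(n), r):
            v = np.ones(k)
            for i in subset:
                v = v * M[i]          # entrywise (Hadamard) product
            rows.append(v)
    return np.vstack(rows)

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
H = hadamard_extension(M)             # shape (4, 3) for this 2 x 3 input
print(np.linalg.matrix_rank(H) == M.shape[1])  # full column rank? -> True
```

Full column rank of such extensions is exactly the property the paper studies as an ingredient of identification algorithms.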
MiniV2G is particularly suitable for security researchers to study and test real V2G charging scenarios. MiniV2G can reproduce with high fidelity a V2G architecture to easily simulate an EV charging process. Finally, we present a MiniV2G application and show how MiniV2G can be used to study V2G communication and develop attacks and countermeasures that can be applied" +"---\nabstract: 'In this work, we propose a new approach for language identification using multi-head self-attention combined with raw-waveform-based 1D convolutional neural networks for Indian languages. Our approach uses an encoder, multi-head self-attention, and a statistics pooling layer. The encoder learns features directly from raw waveforms using 1D convolution kernels and an LSTM layer. The LSTM layer captures temporal information between the features extracted by the 1D convolutional layer. The multi-head self-attention layer takes outputs of the LSTM layer and applies self-attention mechanisms on these features with M different heads. This process helps the model give more weightage to the more useful features and less weightage to the less relevant features. Finally, the frame-level features are combined using a statistics pooling layer to extract the utterance-level feature vector for label prediction. We conduct all our experiments on the 373 hrs of audio data for eight different Indian languages. Our experiments show that our approach outperforms the baseline model by an absolute 3.69% improvement in F1-score and achieves the best F1-score of 95.90%. Our approach also shows that using raw waveform models gets a 1.7% improvement in performance compared to the models built using handcrafted features.'\naddress: ' FreshWorks Inc.," +"---\nabstract: 'We study the problem of recovering the common $k$-sized support of a set of $n$ samples of dimension $d$, using $m$ noisy linear measurements per sample. Most prior work has focused on the case when $m$ exceeds $k$, in which case $n$ of the order $(k/m)\\log(d/k)$ is both necessary and sufficient. Thus, in this regime, only the total number of measurements across the samples matters, and there is not much benefit in getting more than $k$ measurements per sample. In the measurement-constrained regime where we have access to fewer than $k$ measurements per sample, we show an upper bound of $O((k^{2}/m^{2})\\log d)$ on the sample complexity for successful support recovery when $m\\ge 2\\log d$. Along with the lower bound from our previous work, this shows a phase transition for the sample complexity of this problem around $k/m=1$. In fact, our proposed algorithm is sample-optimal in both regimes. It follows that, in the $m\\ll k$ regime, multiple measurements from the same sample are more valuable than measurements from different samples.'\nauthor:\n- '[Lekshmi Ramesh]{}'\n- '[Chandra R. Murthy]{}'\n- '[Himanshu Tyagi]{}'\nbibliography:\n- 'IEEEabrv.bib'\n- 'bibfile.bib'\n- 'bibJournalList.bib'\ntitle: Phase Transitions for Support Recovery from Gaussian Linear Measurements" +"---\nabstract: 'A tilted Liouville-master equation in Hilbert space is presented for Markovian open quantum systems. We demonstrate that it is the unraveling of the tilted quantum master equation. The latter is widely used in the analysis and calculations of stochastic thermodynamic quantities in quantum stochastic thermodynamics.
'\nauthor:\n- Fei Liu\ntitle: 'On a tilted Liouville-master equation of open quantum systems'\n---\n\nIntroduction {#section1}\n============\n\nIn the past two decades, stochastic thermodynamics for open quantum systems has attracted considerable theoretical interest\u00a0[@Esposito2009; @Campisi2011; @Alicki2018; @Liu2018]. One of the major issues is the statistics of random thermodynamic variables such as heat, work, entropy production, and efficiency\u00a0[@Kurchan2000; @Breuer2003; @Talkner2007; @DeRoeck2007; @Talkner2008; @Crooks2008; @Garrahan2010; @Subasi2012; @Horowitz2012; @Hekking2013; @Leggio2013; @Horowitz2013; @Zinidarifmmodeheckclseci2014; @Verley2014a; @Gasparinetti2014; @Cuetara2015; @Carrega2015; @Manzano2016; @Suomela2016; @Liu2016a; @Strasberg2017; @Wang2017; @Restrepo2018; @Carollo2018; @Carollo2019; @Liu2020]. The tilted or generalized quantum master equation (TQME) is a useful approach for the study of these problems\u00a0[@Esposito2009]. For instance, the fluctuation theorems of steady-states can be demonstrated according to the symmetries implied in the maximal eigenvalue of the equation\u00a0[@Esposito2009; @Gasparinetti2014; @Cuetara2015; @Liu2020]. To study the concrete probability distributions of the random thermodynamic variables, we can numerically or analytically solve the equation to obtain characteristic functions" +"---\nabstract: 'The evidence for benzonitrile () in the starless cloud core TMC\u20101 makes high-resolution studies of other aromatic nitriles and their ring-chain derivatives especially timely. One such species is phenylpropiolonitrile (3-phenyl-2-propynenitrile, ), whose spectroscopic characterization is reported here for the first time. The low resolution (0.5 cm$^{-1}$) vibrational spectrum of has been recorded at far- and mid-infrared wavelengths (50\u20133500 cm$^{-1}$) using a Fourier Transform interferometer, allowing for the assignment of band centers of 14 fundamental vibrational bands. The pure rotational spectrum of the species has been investigated using a chirped-pulse Fourier transform microwave (FTMW) spectrometer (6\u201318 GHz), a cavity-enhanced FTMW instrument (6\u201320 GHz), and a millimeter-wave one (75\u2013100 GHz, 140\u2013214 GHz). Through the assignment of more than 6200 lines, accurate ground state spectroscopic constants (rotational, centrifugal distortion up to octics, and nuclear quadrupole hyperfine constants) have been derived from our measurements, with a plausible prediction of the weaker bands through calculations. Interstellar searches for this highly polar species can now be undertaken with confidence since the astronomically most interesting radio lines have either been measured or can be calculated to very high accuracy below 300 GHz.'\naddress:\n- 'Universit\u00e9 Paris-Saclay, CNRS, Institut des Sciences Mol\u00e9culaires d\u2019Orsay, 91405 Orsay," +"---\nabstract: 'Machine Learning seeks to identify and encode bodies of knowledge within provided datasets. However, data encodes subjective content, which determines the possible outcomes of the models trained on it. Because such subjectivity enables marginalisation of parts of society, it is termed (social) \u2018bias\u2019 and sought to be removed. In this paper, we contextualise this discourse of bias in the ML community against the subjective choices in the development process.
Through a consideration of how choices in data and model development construct subjectivity, or biases that are represented in a model, we argue that addressing and mitigating biases is near-impossible. This is because both data and ML models are objects for which meaning is made in each step of the development pipeline, from data selection over annotation to model training and analysis. Accordingly, we find the prevalent discourse of bias limiting in its ability to address social marginalisation. We recommend being conscientious of this, and accepting that de-biasing methods only correct for a fraction of biases.'\nauthor:\n- Zeerak Waseem\n- Smarika Lulz\n- Joachim Bingel\n- Isabelle Augenstein\nbibliography:\n- 'eacl2021.bib'\ntitle: 'Disembodied Machine Learning: On the Illusion of Objectivity in NLP'\n---\n\nIntroduction\n============\n\nMachine" +"---\nabstract: 'The infectivity of a virus sample is measured by the infections it causes, via a plaque or focus forming assay (\u00a0or FFU) or an endpoint dilution (ED) assay (, CCID$_{50}$, EID$_{50}$, etc., hereafter collectively $\\text{ID}_{50}$). The counting of plaques or foci at a given dilution intuitively and directly provides the concentration of infectious doses in the undiluted sample. However, it has many technical and experimental limitations. For example, it is subjective as it relies on one\u2019s judgement in distinguishing between two merged plaques and a larger one, or between small plaques and staining artifacts. In this regard, ED assays are more robust because one need only determine whether or not infection occurred. The output of the ED assay, the 50% infectious dose ($\\text{ID}_{50}$), is calculated using either the Spearman-K\u00e4rber (1908, 1931) or Reed-Muench (1938) mathematical approximations. However, these are often miscalculated and their approximation of the $\\text{ID}_{50}$ cannot be reliably related to the infectious dose. Herein, we propose that the plaque and focus forming assays be abandoned, and that the measured output of the ED assay, the $\\text{ID}_{50}$, be replaced by a more useful measure we coined *specific infections* (). We introduce a free, open-source web-application, **midSIN**, that computes the \u00a0concentration in a virus" +"---\nabstract: 'Sequential recommendation (SR) aims to accurately recommend a list of items for a user based on her currently accessed ones. While new users continuously arrive in the real world, one crucial task is to have *inductive* SR that can produce embeddings of users and items without re-training. Given that user-item interactions can be extremely sparse, another critical task is to have *transferable* SR that can transfer the knowledge derived from one domain with rich data to another domain. In this work, we aim to present the *holistic SR* that simultaneously accommodates conventional, inductive, and transferable settings. We propose a novel deep learning-based model, *Relational Temporal Attentive Graph Neural Networks* (RetaGNN), for holistic SR. The main idea of RetaGNN is three-fold. First, to have inductive and transferable capabilities, we train a *relational attentive GNN* on the local subgraph extracted from a user-item pair, in which the learnable weight matrices are on various relations among users, items, and attributes, rather than nodes or edges. Second, long-term and short-term temporal patterns of user preferences are encoded by a proposed *sequential self-attention* mechanism.
Third, a *relation-aware* regularization term is devised for better training of RetaGNN. Experiments conducted on MovieLens, Instagram, and Book-Crossing datasets" +"---\nabstract: 'The basic principle of quantum mechanics\u00a0[@1982Wootters] guarantees the unconditional security of quantum key distribution (QKD)\u00a0[@BB84; @mayers1996quantum; @lo1999unconditional; @shor2000simple; @Scarani2009] at the cost of the inability to amplify quantum states. As a result, despite remarkable progress in worldwide metropolitan QKD networks\u00a0[@Gisin2002; @RevModPhys.92.025002] over the past decades, a long-haul fiber QKD network without trusted relays has not been achieved yet. Here, through the sending-or-not-sending (SNS) protocol\u00a0[@wang2018sns], we complete a twin field QKD (TF-QKD)\u00a0[@nature18Overcoming] and distribute secure keys without any trusted repeater over a 511 km long-haul fiber trunk linking two distant metropolitan areas. Our secure key rate is around 3 orders of magnitude greater than what would be expected if the previous QKD field-test system were applied over the same length. The efficient quantum-state transmission and stable single-photon interference over such a long distance of deployed fiber pave the way to large-scale fiber quantum networks.'\nauthor:\n- 'Jiu-Peng Chen'\n- Chi Zhang\n- Yang Liu\n- Cong Jiang\n- 'Wei-Jun Zhang'\n- 'Zhi-Yong Han'\n- 'Shi-Zhao Ma'\n- 'Xiao-Long Hu'\n- 'Yu-Huai Li'\n- Hui Liu\n- Fei Zhou\n- 'Hai-Feng Jiang'\n- 'Teng-Yun Chen'\n- Hao Li\n- 'Li-Xing You'\n- Zhen Wang\n- 'Xiang-Bin Wang'" +"---\nauthor:\n- Giovanni Antonio Chirilli\ntitle: 'High-energy Operator Product Expansion at sub-eikonal level'\n---\n\nabstract\n\nThe high energy Operator Product Expansion for the product of two electromagnetic currents is extended to the sub-eikonal level in a rigorous way. I calculate the impact factors for polarized and unpolarized structure functions, define new distribution functions, and derive the evolution equations for unpolarized and polarized structure functions in the flavor singlet and non-singlet case.\n\nIntroduction\n============\n\nIn view of future collider experiments, in which hadronic matter will be probed at unprecedented kinematic regimes, there has been, in recent years, an ever-growing interest in understanding the fundamental properties of hadrons, spin and mass, from their constituents, quarks, and gluons [@Boer:2011fh; @Nocera:2014gqa; @deFlorian:2008mr; @deFlorian:2009vb; @Kovchegov:2015pbl; @Kovchegov:2017lsr; @Boussarie:2019icw; @Tarasov:2020cwl; @Hatta:2016aoc; @Hatta:2020riw; @Hatta:2020ltd].\n\nAt high energy (the Regge limit), scattering amplitudes are dominated by gluon dynamics; in particular, the cross section of Deep Inelastic Scattering (DIS) processes is dominated, in the unpolarized case, by the gluon structure function. Within the Leading Log Approximation (LLA), the resummation of logs of energy through the BFKL [@Kuraev:1977fs; @Balitsky:1978ic] formalism predicts a steep rise of the DIS cross-section that has been observed experimentally at" +"---\nabstract: 'Polynomial multiplication is a bottleneck in most of the public-key cryptography protocols, including Elliptic-curve cryptography and several of the post-quantum cryptography algorithms presently being studied. In this paper, we present a library of various large integer polynomial multipliers to be used in hardware cryptocores.
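Before the library description continues below, the baseline operation behind the record that begins above, schoolbook polynomial multiplication $c(x)=a(x)\times b(x)$, can be sketched in a few lines. This is an illustrative Python sketch, not the paper's C++/Verilog generator; the coefficient-list encoding is an assumption for demonstration:

```python
# Illustrative sketch only: schoolbook polynomial multiplication over
# coefficient lists (index i holds the coefficient of x^i). This is the
# quadratic baseline that Karatsuba/Toom-Cook flavours improve upon.
def schoolbook_mul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj   # accumulate partial product of x^(i+j)
    return c

# (2 + x) * (x + 3x^2) = 2x + 7x^2 + 3x^3
assert schoolbook_mul([2, 1], [0, 1, 3]) == [0, 2, 7, 3]
```

The digitized flavours mentioned in the record split the operands into fixed-width digits and schedule such partial products over clock cycles, trading area for latency.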
Our library contains both digitized and non-digitized multiplier flavours for circuit designers to choose from. The library is supported by a C++ generator that automatically produces the multipliers\u2019 logic in Verilog HDL that is amenable to FPGA and ASIC designs. Moreover, for ASICs, it also generates configurable and parameterizable synthesis scripts. The features of the generator allow for a quick generation and assessment of several architectures at the same time, thus allowing a designer to easily explore the (complex) optimization search space of polynomial multiplication.'\nauthor:\n- \n- \n- \nbibliography:\n- 'multiplier.bib'\ntitle: 'An Open-source Library of Large Integer Polynomial Multipliers [^1] '\n---\n\nSchoolbook multiplier, Karatsuba multiplier, Toom-Cook multiplier, digitized polynomial multiplication, large integer polynomial multipliers\n\nIntroduction\n============\n\nPolynomial multiplication (i.e., $c(x)=a(x)\\times b(x)$) is a fundamental building block for cryptographic hardware and is often identified as the bottleneck in implementing efficient circuits. The most widely deployed public key crypto systems (e.g., RSA" +"---\nabstract: 'We present a new hybrid quantum-classical algorithm for optimizing unitary coupled-cluster (UCC) wave functions deemed the projective quantum eigensolver (PQE), amenable to near-term noisy quantum hardware. Contrary to variational quantum algorithms, PQE optimizes a trial state using residuals (projections of the Schr\u00f6dinger equation) rather than energy gradients. We show that the residuals may be evaluated by simply measuring two energy expectation values per element. We also introduce a selected variant of PQE (SPQE) that uses an adaptive ansatz built from arbitrary-order particle-hole operators, offering an alternative to gradient-based selection procedures. PQE and SPQE are tested on a set of molecular systems covering both the weak and strong correlation regimes, including hydrogen clusters with 4\u201310 atoms and the molecule. When employing a fixed ansatz, we find that PQE can converge disentangled (factorized) UCC wave functions to essentially identical energies as variational optimization while requiring fewer computational resources. A comparison of SPQE and adaptive variational quantum algorithms shows that\u2014for ans\u00e4tze containing the same number of parameters\u2014the two methods yield results of comparable accuracy. Finally, we show that SPQE performs similarly to, and in some cases better than, selected configuration interaction and the density matrix renormalization group on 1\u20133 dimensional strongly" +"---\nabstract: 'The gasification of multicomponent fuel drops is relevant in various energy-related technologies. An interesting phenomenon associated with this process is the self-induced explosion of the drop, producing a multitude of smaller secondary droplets, which promotes overall fuel atomization and, consequently, improves the combustion efficiency and reduces emissions of liquid-fueled engines. Here, we study a unique explosive gasification process of a tri-component droplet consisting of water, ethanol, and oil (\u201couzo\"), by high-speed monitoring of the entire gasification event taking place in the well-controlled, levitated Leidenfrost state over a superheated plate.
It is observed that the preferential evaporation of the most volatile component, ethanol, triggers nucleation of oil microdroplets/nanodroplets in the remaining drop, which, consequently, becomes an opaque oil-in-water microemulsion. The tiny oil droplets subsequently coalesce into a large one, which, in turn, wraps around the remnant water. Because of the encapsulating oil layer, the droplet can no longer produce enough vapor for its levitation, and, thus, falls and contacts the superheated surface. The direct thermal contact leads to vapor bubble formation inside the drop and consequently drop explosion in the final stage.'\nauthor:\n- Sijia Lyu\n- Huanshu Tan\n- Yuki Wakata\n- Xianjun Yang\n- 'Chung K." +"---\nabstract: 'Boundary discontinuity and its inconsistency with the final detection metric have been the bottleneck for rotating detection regression loss design. In this paper, we propose a novel regression loss based on Gaussian Wasserstein distance as a fundamental approach to solve the problem. Specifically, the rotated bounding box is converted to a 2-D Gaussian distribution, which makes it possible to approximate the non-differentiable rotational-IoU-induced loss by the Gaussian Wasserstein distance (GWD), which can be learned efficiently by gradient back-propagation. GWD can still be informative for learning even when there is no overlap between two rotating bounding boxes, which is often the case for small object detection. Thanks to its three unique properties, GWD can also elegantly solve the boundary discontinuity and square-like problem regardless of how the bounding box is defined. Experiments on five datasets using different detectors show the effectiveness of our approach. Codes are made publicly available[^1][^2].'\nauthor:\n- |\n Xue Yang$^{1,2,}$[^3], Junchi Yan$^{1,2,}$[^4], Qi Ming$^{4}$, Wentao Wang$^{1}$, Xiaopeng Zhang$^{3}$, Qi Tian$^{3}$\\\n $^{1}$Department of Computer Science and Engineering, Shanghai Jiao Tong University\\\n $^{2}$MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University\\\n $^{3}$Huawei Inc. $^{4}$School of Automation, Beijing Institute of Technology\\\n [yangxue-2019-sjtu@sjtu.edu.cn]{}\nbibliography:\n- 'egbib.bib'\ntitle: Rethinking" +"---\nauthor:\n- 'Mark Standke, Abdullah Kiwan, Annalena Lange, Dr. Silvan Berg'\n- 'EQMania UG, Rheinwerkallee 6, 53227 Bonn'\nbibliography:\n- 'nejlt.bib'\ntitle: Introduction of a novel word embedding approach based on technology labels extracted from patent data\n---\n\n1. Introduction {#introduction .unnumbered}\n===============\n\nIn recent decades, without exception, the number of granted patents as well as the number of patent applications has grown steadily for patent authorities all over the world. Taking the US as an example, the number of granted patents grew by 75% from 2009 to 2018, reaching a total of more than 3 million patents in force in 2018 [@USPatStat]. In addition, the diversity in patent-specific language is increasing, which makes researching existing intellectual property rights for products or services under development more laborious. Patent attorneys or patent applicants can keep track of hundreds of synonyms related to patent-specific vocabulary only with great effort. This has a significant impact on the complexity of patent language and in particular on the quantity of applied synonyms.
Needless to say, novel techniques beyond classical Boolean searches must be developed" +"---\nabstract: 'Nuclear spins of noble gases feature extremely long coherence times but are inaccessible to optical photons. Here we realize a coherent interface between light and noble-gas spins that is mediated by alkali atoms. We demonstrate the optical excitation of the noble-gas spins and observe the coherent back-action on the light in the form of high-contrast two-photon spectra. We report on a record two-photon linewidth of $5\\pm0.7$ mHz (millihertz) above room temperature, corresponding to a one-minute coherence time. This experiment provides a demonstration of coherent bi-directional coupling between light and noble-gas spins, rendering their long-lived spin coherence accessible for manipulations in the optical domain.'\nauthor:\n- Or Katz\n- Roy Shaham\n- Ofer Firstenberg\nbibliography:\n- 'scibib.bib'\ntitle: 'Coupling light to a nuclear spin gas with a two-photon linewidth of five millihertz'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe coupling of light to atomic spins is a principal tool in quantum information processing using photons [@QIP1; @QIP2; @QIP3; @QIP4] and in precision optical spectroscopy, enabling determination of atomic structure [@Arimondo; @Henderson], time and frequency standards [@J-Ye], and laboratory searches of new physics [@NP]. The performance of these applications depends on the coherence time of the spins and on the efficiency with" +"---\nabstract: 'We propose a model of the substructural logic of Bunched Implications (BI) that is suitable for reasoning about quantum states. In our model, the separating conjunction of BI describes separable quantum states. We develop a program logic where pre- and post-conditions are BI formulas describing quantum states\u2014the program logic can be seen as a counterpart of separation logic for imperative quantum programs. We exercise the logic for proving the security of the quantum one-time pad and secret sharing, and we show how the program logic can be used to discover a flaw in Google Cirq\u2019s tutorial on the Variational Quantum Algorithm (VQA).'\nauthor:\n- \n- \n- \n- \n- \nbibliography:\n- 'main.bib'\ntitle: A Quantum Interpretation of Bunched Logic for Quantum Separation Logic\n---\n\nIntroduction\n============\n\nThe logic of Bunched Implications (BI) of O\u2019Hearn and Pym\u00a0[@OP99; @Pym02; @POY04] is a substructural logic that features resource-aware connectives. One such connective is $*$, known as separating conjunction: informally, an assertion $\\phi * \\psi$ holds with respect to a resource $R$ if the resource $R$ can be split into resources $R'$ and $R''$ such that $\\phi$ holds with respect to $R'$ and $\\psi$ holds with respect to $R''$. This interpretation is particularly" +"---\nabstract: 'In the same base setup as Sakharov\u2019s induced gravity, we investigate the emergence of gravity in effective quantum field theories (QFT), with particular emphasis on the gauge sector in which gauge bosons acquire anomalous masses in proportion to the ultraviolet cutoff $\\Lambda_\\wp$. Drawing on the fact that $\\Lambda_\\wp^2$ corrections explicitly break the gauge and Poincar\u00e9 symmetries, we find that it is possible to map $\\Lambda_\\wp^2$ to spacetime curvature as a covariance relation and we also find that this map erases the anomalous gauge boson masses.
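As a toy illustration of the separating-conjunction clause quoted in the Bunched Implications record above: the following hypothetical, set-based model (not from the paper, whose resources are quantum states) treats resources as finite sets, splitting as disjoint union, and assertions as predicates:

```python
# Hypothetical toy model of BI's separating conjunction over set-shaped
# resources: phi * psi holds of R iff R splits into disjoint R', R''
# with phi(R') and psi(R''). The paper's model uses quantum states instead.
from itertools import chain, combinations

def star(phi, psi):
    def holds(R: frozenset) -> bool:
        subsets = chain.from_iterable(
            combinations(R, r) for r in range(len(R) + 1))
        return any(phi(frozenset(s)) and psi(R - frozenset(s))
                   for s in subsets)
    return holds

has_a = lambda R: "a" in R
has_b = lambda R: "b" in R
print(star(has_a, has_b)(frozenset({"a", "b"})))   # True: split {a} | {b}
print(star(has_a, has_a)(frozenset({"a"})))        # False: "a" is not shareable
```

The second call shows the resource-sensitivity that distinguishes $*$ from ordinary conjunction: a single resource cannot be used on both sides of the split.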
The resulting framework describes gravity by general relativity (GR) and matter by the QFT itself with $\\log\\Lambda_\\wp$ corrections (dimensional regularization). This QFT-GR concord predicts the existence of new physics beyond the Standard Model such that the new physics can be a weakly-interacting or even a non-interacting sector comprising the dark matter, dark energy and possibly more. The concord has consequential implications for collider, astrophysical and cosmological phenomena.'\nauthor:\n- 'Durmu[\u015f]{} Demir'\ndate: 'Received: date / Accepted: date'\ntitle: 'Emergent Gravity as the Eraser of Anomalous Gauge Boson Masses, and QFT-GR Concord'\n---\n\nIntroduction {#sect:intro}\n============\n\nThe problem of reconciling QFTs with the GR has been under intense study for several decades. The concord between" +"---\nabstract: 'Recovery of power flow to critical infrastructures, after grid failure, is a crucial need arising in scenarios that are becoming increasingly frequent. This article proposes a power transition and recovery strategy based on mode-dependent droop-controlled inverters. The control strategy of the inverters achieves the following objectives: 1) regulate the output active and reactive power of the droop-based inverters to a desired value while operating in on-grid mode; 2) seamless transition and recovery of power flow injections into the critical loads in the network by inverters operating in off-grid mode after the main grid fails; 3) require minimal information of grid/network status and conditions for the mode transition of droop control. A framework for assessing the stability of the system and for guiding the choice of controller parameters is developed using control-oriented modeling. A comprehensive controller hardware-in-the-loop-based real-time simulation study on a test-system based on the realistic electrical network of M-Health Fairview, University of Minnesota Medical Center, corroborates the efficacy of the proposed controller strategy.'\nauthor:\n- 'Soham\u00a0Chakraborty,\u00a0 Sourav\u00a0Patel,\u00a0 and\u00a0Murti V. Salapaka,\u00a0[^1][^2]'\ntitle: 'Recovery of Power Flow to Critical Infrastructures using Mode-dependent Droop-based Inverters'\n---\n\nDroop control, emergency power supply system, parallel" +"---\nabstract: 'Recent developments in Natural Language Processing (NLP) demonstrate that large-scale, self-supervised pre-training can be extremely beneficial for downstream tasks. These ideas have been adapted to other domains, including the analysis of the amino acid sequences of proteins. However, to date most attempts on protein sequences rely on direct masked language model style pre-training. In this work, we design a new, adversarial pre-training method for proteins, extending and specializing similar advances in NLP.
We show compelling results in comparison to traditional MLM pre-training, though further development is needed to ensure the gains are worth the significant computational cost.'\nauthor:\n- |\n \\\n \\\n \\\n \\\n \\\n CSAIL, MIT\nbibliography:\n- 'references.bib'\ntitle: 'Adversarial Contrastive Pre-training for Protein Sequences'\n---\n\nProtein pre-training, adversarial methods, pre-training, contrastive estimation, transformers\n\nIntroduction\n============\n\nPre-training, particularly using a self-supervised masked language model (MLM) task over a large corpus, has recently emerged as a powerful tool to improve various prediction and generation tasks, first in natural language processing (NLP) via systems like BERT\u00a0[@devlin_bert_2019], and later in other domains, including biomedical domains such as protein sequences[^1]\u00a0[@rao_evaluating_2019; @alley_unified_2019; @conneau_unsupervised_2019; @lu2020self]. However, unlike NLP, where newer methods have brought improved performance, the state of the art" +"---\nabstract: 'To gain insight into the reaction mechanisms of activated processes, we introduce an exact approach for quantifying the topology of high-dimensional probability surfaces of the underlying dynamic processes. Instead of Morse indexes, we study the homology groups of a sequence of superlevel sets of the probability surface over high-dimensional configuration spaces using persistent homology. For alanine-dipeptide isomerization, a prototype of activated processes, we identify locations of probability peaks and connecting ridges, along with measures of their global prominence. Instead of a saddle-point, the transition state ensemble (TSE) of conformations is at the most prominent probability peak after reactants/products, when proper reaction coordinates are included. Intuition-based models, even those exhibiting a double-well, fail to capture the dynamics of the activated process. Peak occurrence, prominence, and locations can be distorted upon subspace projection. While principal component analysis accounts for conformational variance, it inflates the complexity of the surface topology and destroys dynamic properties of the topological features. In contrast, the TSE emerges naturally as the most prominent peak beyond the reactant/product basins, when projected to a subspace of minimum dimension containing the reaction coordinates. Our approach is general and can be applied to investigate the topology of high-dimensional probability surfaces of other activated" +"---\nabstract: |\n It is well-known that pythagorean triples can be represented by points of the unit circle with rational coordinates. These points form an abelian group, and we describe its structure. This structural description yields, almost immediately, an enumeration of the normalized pythagorean triples with a given hypotenuse, and also an effective method for producing all such triples.
This effective method seems to be new.\n\n This paper is intended for the general mathematical audience, including undergraduate mathematics students, and therefore it contains plenty of background material, some history and several examples and exercises.\naddress: 'Department of Mathematics, Ben Gurion University, Be\u2019er Sheva 84105, Israel'\nauthor:\n- Amnon Yekutieli\ndate: 28 January 2021\ntitle: 'Pythagorean Triples, Complex Numbers, Abelian Groups and Prime Numbers'\n---\n\nPythagorean Triples\n===================\n\nA [*pythagorean triple*]{} is a triple $(a, b, c)$ of positive integers satisfying the equation $$\\label{eqn:1}\na^2 + b^2 = c^2 .$$ The reason for the name is, of course, the Pythagoras Theorem, which says that the sides of a right-angled triangle, with base $a$, height $b$ and hypotenuse $c$, satisfy this equation. See Figure \\[fig:100\\].\n\n![Right-angled triangle, with base $a$, height $b$ and hypotenuse $c$.[]{data-label=\"fig:100\"}](drawing1.jpg)\n\nWe say" +"---\nabstract: |\n This paper is a theoretical and numerical study of the uniform growth of a repeating sinusoidal imperfection in the line of a strut on a nonlinear elastic Winkler type foundation. The imperfection is introduced by considering an initially deformed shape which is a sine function with a half wavelength. The restoring force is either a bi-linear or an exponential profile. Periodic solutions of the equilibrium problem are found using three different approaches: a semi-analytical method, an explicit solution of a Galerkin method and a direct numerical resolution. These methods are found to be in very good agreement and show the existence of a maximum imperfection size which leads to a limit point in the equilibrium curve of the system. The existence of this limit point is very important since it governs the appearance of localization phenomena.\n\n Using the Galerkin method, we then establish an exact formula for this maximum imperfection size and we show that it does not depend on the choice of the restoring force. We also show that this method provides a better estimate than previous publications. The decrease of the maximum compressive force supported by the beam as a function of the imperfection magnitude" +"---\nabstract: 'Existing CNN-based RGB-D salient object detection (SOD) networks all need to be pretrained on ImageNet to learn the hierarchical features that help provide a good initialization. However, the collection and annotation of large-scale datasets are time-consuming and expensive. In this paper, we utilize self-supervised representation learning (SSL) to design two pretext tasks: the cross-modal auto-encoder and the depth-contour estimation. Our pretext tasks require only a few unlabeled RGB-D datasets to perform pretraining, which makes the network capture rich semantic contexts and reduce the gap between two modalities, thereby providing an effective initialization for the downstream task. In addition, for the inherent problem of cross-modal fusion in RGB-D SOD, we propose a consistency-difference aggregation (CDA) module that splits a single feature fusion into multi-path fusion to achieve an adequate perception of consistent and differential information. The CDA module is general and suitable for cross-modal and cross-level feature fusion. Extensive experiments on six benchmark datasets show that our self-supervised pretrained model performs favorably against most state-of-the-art methods pretrained on ImageNet.
The source code will be publicly available at .'\nauthor:\n- 'Xiaoqi Zhao,^1^ Youwei Pang, ^1^ Lihe Zhang, ^1^[^1] Huchuan Lu, ^1,2^ Xiang Ruan ^3^'\n- Author" +"---\nabstract: 'We solve the entanglement-assisted (EA) classical capacity region of quantum multiple-access channels with an arbitrary number of senders. As an example, we consider the bosonic thermal-loss multiple-access channel and solve the one-shot capacity region enabled by an entanglement source composed of sender-receiver pairwise two-mode squeezed vacuum states. The EA capacity region is strictly larger than the capacity region without entanglement assistance. With two-mode squeezed vacuum states as the source and phase modulation as the encoding, we also design practical receiver protocols to realize the entanglement advantages. Four practical receiver designs, based on optical parametric amplifiers, are given and analyzed. In the parameter region of a large noise background, the receivers can enable a simultaneous rate advantage of $82.0\\%$ for each sender. Due to teleportation and superdense coding, our results for EA classical communication can be directly extended to EA quantum communication at half of the rates. Our work provides a unique and practical network communication scenario where entanglement can be beneficial.'\nauthor:\n- Haowei Shi\n- 'Min-Hsiu Hsieh'\n- Saikat Guha\n- Zheshen Zhang\n- Quntao Zhuang\ntitle: 'Entanglement-assisted capacity regions and protocol designs for quantum multiple-access channels'\n---\n\nIntroduction\n============\n\nCommunication channels model physical media for information transmission." +"---\nabstract: 'The shortest secure path (routing) problem in communication networks has to deal with multiple attack layers, e.g., man-in-the-middle, eavesdropping, packet injection, packet insertion, etc. Consider different probabilities for each such attack over an edge, probabilities that can differ across edges. Furthermore, the use of a single shortest path (for routing) implies a possible traffic bottleneck, which should be avoided if possible, a requirement we term [*pathneck security avoidance*]{}. Finding all Pareto\u2013optimal solutions for the multi-criteria single-source single-destination shortest secure path problem with non-negative edge lengths might yield a solution with an exponential number of paths. In the first part of this paper, we study specific settings of the multi-criteria shortest secure path problem, which are based on prioritized multi-criteria and on $k$-shortest secure paths. In the second part, we show a polynomial-time algorithm that, given an undirected graph $G$ and a pair of vertices $(s,t)$, finds prioritized multi-criteria $2$-disjoint (vertex/edge) shortest secure paths between $s$ and $t$. In the third part of the paper, we introduce the $k$-disjoint all-criteria-shortest secure paths problem, which is solved in time $O(\\min(k|E|, |E|^{3/2}))$.'\nauthor:\n- Yefim Dinitz\n- Shlomi Dolev\n- Manish Kumar\nbibliography:\n- 'mybibliography.bib'\ntitle: ' Polynomial Time $k$-Shortest Multi-Criteria Prioritized and" +"---\nabstract: 'Strongly irradiated exoplanets develop extended atmospheres that can be utilized to probe the deeper planet layers. This connection is particularly useful in the study of small exoplanets, whose bulk atmospheres are challenging to characterize directly. Here we report the 3.4-sigma detection of C [ii]{} ions during a single transit of the super-Earth $\\pi$ Men c in front of its Sun-like host star.
The transit depth and Doppler velocities are consistent with the ions filling the planet\u2019s Roche lobe and moving preferentially away from the star, an indication that they are escaping the planet. We argue that $\\pi$ Men c possesses a thick atmosphere with abundant heavy volatiles ($\\gtrsim$50% by mass of atmosphere) but that need not be carbon rich. Our reasoning relies upon cumulative evidence from the reported C [ii]{} detection, the non-detection of H [i]{} atoms in a past transit, modeling of the planet\u2019s interior and the assumption that the atmosphere, having survived the most active phases of its Sun-like host star, will survive another 0.2\u20132 Gyr. Depending on the current mass of the atmosphere, $\\pi$ Men c may still transition into a bare rocky core. Our findings confirm the hypothesized compositional diversity of small exoplanets, and represent" +"---\nabstract: 'We present predictions for the extent of the dust-continuum emission of thousands of main-sequence galaxies drawn from the TNG50 simulation between $z=1-5$. To this aim, we couple the radiative transfer code `SKIRT` to the output of the TNG50 simulation and measure the dust-continuum half-light radius of the modeled galaxies, assuming a Milky Way dust type and a metallicity-dependent dust-to-metal ratio. The dust-continuum half-light radius at observed-frame 850 is up to $\\sim$75 per cent larger than the stellar half-mass radius, but significantly more compact than the observed-frame 1.6 (roughly corresponding to H-band) half-light radius, particularly towards high redshifts: the compactness compared to the 1.6 emission increases with redshift. This is driven by obscuration of stellar light from the galaxy centres, which increases the apparent extent of 1.6 disk sizes relative to that at 850 . The difference in relative extents increases with redshift because the observed-frame 1.6 emission stems from ever shorter wavelength stellar emission. These results suggest that the compact dust-continuum emission observed in $z>1$ galaxies is not (necessarily) evidence of the buildup of a dense central stellar component. We also find that the dust-continuum half-light radius very closely follows the radius containing half the star formation" +"---\nabstract: 'When dealing with spreading processes on networks it can be of the utmost importance to test the reliability of data and identify potential unobserved spreading paths. In this paper we address these problems and propose methods for hidden layer identification and reconstruction. We also explore the interplay between the difficulty of the task and the structure of the multilayer network describing the whole system where the spreading process occurs. Our methods stem from an exact expression for the likelihood of a cascade in the Susceptible-Infected model on an arbitrary graph. We then show that by employing statistical properties of unimodal distributions and simple heuristics describing the joint likelihood of a series of cascades one can obtain an estimate of both the existence of a hidden layer and its content, with success rates far exceeding those of a null model. We conduct our analyses on both synthetic and real-world networks, providing evidence for the viability of the approach presented.'\nauthor:\n- '\u0141ukasz G. 
Gajewski'\n- Jan Cho\u0142oniewski\n- Mateusz Wilinski\nbibliography:\n- 'references.bib'\ntitle: Detecting Hidden Layers from Spreading Dynamics on Complex Networks\n---\n\nIntroduction\n============\n\nReal-world complex systems can often be described by interconnected structures known as multilayer networks [@de2013mathematical; @kivela2014multilayer;" +"---\nabstract: 'To retrieve more relevant, appropriate and useful documents given a query, finding clues about that query through the text is crucial. Recent deep learning models regard the task as a term-level matching problem, which seeks exact or similar query patterns in the document. However, we argue that they are inherently based on local interactions and do not generalise to ubiquitous, non-consecutive contextual relationships. In this work, we propose a novel relevance matching model based on graph neural networks to leverage the document-level word relationships for ad-hoc retrieval. In addition to the local interactions, we explicitly incorporate all contexts of a term through the graph-of-word text format. Matching patterns can be revealed accordingly to provide a more accurate relevance score. Our approach significantly outperforms strong baselines on two ad-hoc benchmarks. We also experimentally compare our model with BERT and show our advantages on long documents.'\nauthor:\n- '**Yufeng Zhang^1^[^1], Jinghao Zhang^1,2^, Zeyu Cui^1,2^, Shu Wu^1,2,3^[^2] and Liang Wang^1,2^**\\'\nbibliography:\n- 'ref.bib'\ntitle: 'A Graph-based Relevance Matching Model for Ad-hoc Retrieval'\n---\n\nIntroduction\n============\n\nDeep learning models have proved remarkably successful for information retrieval (IR) in recent years. The goal herein is to rank among a collection of documents the" +"---\nabstract: 'Two popular approaches to model-free continuous control tasks are SAC and TD3. At first glance these approaches seem rather different; SAC aims to solve the entropy-augmented MDP by minimising the KL-divergence between a stochastic proposal policy and a hypothetical energy-based soft Q-function policy, whereas TD3 is derived from DPG, which uses a deterministic policy to perform policy gradient ascent along the value function. In reality, both approaches are remarkably similar, and belong to a family of approaches we call \u2018Off-Policy Continuous Generalized Policy Iteration\u2019. This illuminates their similar performance in most continuous control benchmarks, and indeed when hyperparameters are matched, their performance can be statistically indistinguishable. To further remove any difference due to implementation, we provide [OffCon3](https://github.com/fiorenza2/OffCon3) (*Off*-Policy *Con*tinuous *Con*trol: *Con*solidated), a code base featuring state-of-the-art versions of both algorithms.'\nauthor:\n- |\n Philip J. Ball\\\n Department of Engineering Science\\\n University of Oxford\\\n Oxford, UK\\\n `ball@robots.ox.ac.uk`\\\n- |\n Stephen J. Roberts\\\n Department of Engineering Science\\\n University of Oxford\\\n Oxford, UK\\\n `sjrob@robots.ox.ac.uk`\\\nbibliography:\n- 'refs.bib'\ntitle: '[OffCon3](https://github.com/fiorenza2/OffCon3): What is State-of-the-Art Anyway?'\n---\n\nIntroduction\n============\n\nState-of-the-art performance in model-free continuous control reinforcement learning (RL) has been dominated by off-policy maximum-entropy/soft-policy based methods, namely Soft Actor Critic [@haarnoja2018soft; @haarnoja2018softapp].
This is evidenced" +"---\nabstract: 'Risk prediction capitalizing on emerging human genome findings holds great promise for new prediction and prevention strategies. While the large amounts of genetic data generated from high-throughput technologies offer us a unique opportunity to study a deep catalog of genetic variants for risk prediction, the high dimensionality of genetic data and complex relationships between genetic variants and disease outcomes bring tremendous challenges to risk prediction analysis. To address these rising challenges, we propose a kernel-based neural network (KNN) method. KNN inherits features from both linear mixed models (LMM) and classical neural networks and is designed for high-dimensional risk prediction analysis. To deal with datasets with millions of variants, KNN summarizes genetic data into kernel matrices and uses the kernel matrices as inputs. Based on the kernel matrices, KNN builds a single-layer feedforward neural network, which makes it feasible to consider complex relationships between genetic variants and disease outcomes. The parameter estimation in KNN is based on MINQUE, and we show that, under certain conditions, the average prediction error of KNN can be smaller than that of LMM. Simulation studies also confirm the results.'\nauthor:\n- |\n Xiaoxi Shen, Xiaoran Tong and\\\n Qing Lu\nbibliography:\n- 'KNN.bib'\ntitle: '**A Kernel-Based" +"---\nabstract: 'We present an implementation of the dual foliation generalized harmonic gauge (DF-GHG) formulation within the pseudospectral code `bamps`. The formalism promises to give greater freedom in the choice of coordinates that can be used in numerical relativity. As a specific application we focus here on the treatment of black holes in spherical symmetry. Existing approaches to black hole excision in numerical relativity are susceptible to failure if the boundary fails to remain outflow. We present a method, called DF-excision, to avoid this failure. Our approach relies on carefully choosing coordinates in which the coordinate lightspeeds are under strict control. These coordinates are then combined with the DF-GHG formulation. After performing a set of validation tests in a simple setting, we study the accretion of large pulses of scalar field matter on to a spherical black hole. We compare the results of DF-excision with a naive setup. DF-excision proves reliable even when the previous approach fails.'\nauthor:\n- 'Maitraya K ${}^{1,2}$'\n- 'David ${}^{3}$'\n- 'K Rajesh ${}^{1,2}$'\n- 'Sarah Renkhoff${}^{5}$'\n- 'Hannes R ${}^{4}$'\n- 'Bernd ${}^{5}$'\nbibliography:\n- 'DFex.bib'\ntitle: 'An Implementation of DF-GHG with Application to Spherical Black Hole Excision'\n---\n\nIntroduction\n============\n\nFree-evolution formulations of" +"---\nabstract: 'We explore the extended Koopmans\u2019 theorem (EKT) within the phaseless auxiliary-field quantum Monte Carlo (AFQMC) method. The EKT allows for the direct calculation of electron addition and removal spectral functions using reduced density matrices of the $N$-particle system, and avoids the need for analytic continuation. The lowest level of EKT with AFQMC, called EKT1-AFQMC, is benchmarked using small molecules, 14-electron and 54-electron uniform electron gas supercells, and diamond at the $\\Gamma$-point.
Via comparison with numerically exact results (when possible) and coupled-cluster methods, we find that EKT1-AFQMC can reproduce the qualitative features of spectral functions for Koopmans-like charge excitations with errors in peak locations of less than 0.25 eV in a finite basis. We also note the numerical difficulties that arise in the EKT1-AFQMC eigenvalue problem, especially when back-propagated quantities are very noisy. We show how a systematic higher order EKT approach can correct errors in EKT1-based theories with respect to the satellite region of the spectral function. Our work will be of use for the study of low-energy charge excitations and spectral functions in correlated molecules and solids where AFQMC can be reliably performed.'\nauthor:\n- Joonho Lee\n- 'Fionn D. Malone'\n- 'Miguel A. Morales'\n- 'David" +"---\nabstract: 'As the public seeks greater accountability and transparency from machine learning algorithms, the research literature on methods to explain algorithms and their outputs has rapidly expanded. Feature importance methods form a popular class of explanation methods. In this paper, we apply the lens of feminist epistemology to recent feature importance research. We investigate what epistemic values are implicitly embedded in feature importance methods and how or whether they are in conflict with feminist epistemology. We offer some suggestions on how to conduct research on explanations that respects feminist epistemic values, taking into account the importance of social context, the epistemic privileges of subjugated knowers, and adopting more interactional ways of knowing.'\nauthor:\n- 'Leif Hancox-Li'\n- 'I. Elizabeth Kumar'\nbibliography:\n- 'sample-base.bib'\ntitle: 'Epistemic values in feature importance methods: Lessons from feminist epistemology'\n---\n\nIntroduction\n============\n\nIn recent years, the number of new methods for measuring feature importance for machine learning (ML) models has exploded, leaving ML practitioners spoilt for choice. As black-box algorithms with inscrutable inner mechanisms are increasingly used for crucial decisions, demands for greater transparency and accountability have increased, leading to legal requirements for explanation like that in the European Union\u2019s General Data Protection Regulation" +"---\nabstract: 'With the emergence of personality computing as a new research field related to artificial intelligence and personality psychology, we have witnessed an unprecedented proliferation of personality-aware recommendation systems. Unlike conventional recommendation systems, these new systems solve traditional problems such as the cold start and data sparsity problems. This survey aims to study and systematically classify personality-aware recommendation systems. To the best of our knowledge, this survey is the first that focuses on personality-aware recommendation systems. We explore the different design choices of personality-aware recommendation systems, by comparing their personality modeling methods, as well as their recommendation techniques. Furthermore, we present the commonly used datasets and point out some of the challenges of personality-aware recommendation systems.'\nauthor:\n- 'Sahraoui Dhelim, Nyothiri Aung, Mohammed Amine Bouras, Huansheng Ning and Erik Cambria. 
[^1][^2] [^3]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'refs.bib'\ntitle: 'A Survey on Personality-Aware Recommendation Systems'\n---\n\nRecommendation system, Personality computing, personality traits, Big-five model, personality-aware, social computing, collaborative filtering.\n\nIntroduction\n============\n\nPersonality computing is the interdisciplinary field that focuses on the integration of personality psychology theories with computing systems. It has been proven that leveraging personality theories could help to tackle" +"---\nabstract: 'Speech emotion recognition is a vital contributor to the next generation of human-computer interaction (HCI). However, existing small-scale databases have limited the development of related research. In this paper, we present LSSED, a challenging large-scale English speech emotion dataset, which has data collected from 820 subjects to simulate real-world distribution. In addition, we release some pre-trained models based on LSSED, which can not only promote the development of speech emotion recognition, but can also be transferred to related downstream tasks such as mental health analysis where data is extremely difficult to collect. Finally, our experiments show the necessity of large-scale datasets and the effectiveness of pre-trained models. The dataset will be released on .'\naddress: |\n $^{\\star}$ School of Electronic and Information Engineering, South China University of Technology, China\\\n $^{\\dagger}$ UBTECH Robotics Corp, China\nbibliography:\n- 'refs.bib'\ntitle: 'LSSED: A Large-Scale Dataset and Benchmark for Speech Emotion Recognition'\n---\n\nspeech emotion recognition, dataset, pre-trained model, deep learning\n\nIntroduction {#sec:intro}\n============\n\nSpeech emotion recognition (SER) is a necessary part of the human-computer interaction system. Although emotion itself is very abstract, it still has some obvious intonation characteristics. Intuitively, sad voices are generally low-pitched and slow while happy voices" +"---\nabstract: 'Feller, Klug, Schirmer and Zemke showed that the homology and the intersection form of a closed trisected 4-manifold are described in terms of the trisection diagram. In this paper, it is confirmed that we are able to calculate those of a trisected 4-manifold with boundary in a similar way. Moreover, we describe a representative of the second Stiefel-Whitney class by the relative trisection diagram.'\nauthor:\n- HOKUTO TANIMOTO\ntitle: HOMOLOGY OF RELATIVE TRISECTION AND ITS APPLICATION\n---\n\nIntroduction\n============\n\nGay and Kirby [@GK1] introduced a trisection as a decomposition of a 4-manifold into three 4-dimensional handlebodies. They mainly dealt with closed manifolds. Since Castro, Gay and Pinz\u00f3n-Caicedo defined a relative trisection clearly in [@CGPC1] and [@CGPC2], we are able to deal with the case of 4-manifolds with boundary in a similar way to the closed case, e.g., via a trisection diagram given by three families of curves on the central surface.\n\nFeller, Klug, Schirmer and Zemke [@FKSZ1] expressed the homology and the intersection form of a closed 4-manifold in terms of the trisection diagram in such a way that one of the three families of curves plays a key role. On the other hand, Florens and Moussard [@FM1] described the homology such that roles" +"---\nabstract: 'Intelligent reflecting surfaces (IRS) can improve the physical layer security (PLS) by providing a controllable wireless environment.
In this paper, we propose a novel PLS technique with the help of IRS implemented by an intelligent mirror array for the visible light communication (VLC) system. First, for the IRS aided VLC system containing an access point (AP), a legitimate user and an eavesdropper, the IRS channel gain and a lower bound of the achievable secrecy rate are derived. Further, to enhance the IRS channel gain of the legitimate user while restricting the IRS channel gain of the eavesdropper, we formulate an achievable secrecy rate maximization problem for the proposed IRS-aided PLS technique to find the optimal orientations of mirrors. Since the sensitivity of mirrors\u2019 orientations on the IRS channel gain makes the optimization problem hard to solve, we transform the original problem into a reflected spot position optimization problem and solve it by a particle swarm optimization (PSO) algorithm. Our simulation results show that secrecy performance can be significantly improved by adding an IRS in a VLC system.'\nauthor:\n- \nbibliography:\n- 'IEEEabrv.bib'\n- 'references-Lei-paper-IRSVLC.bib'\ntitle: Secure Visible Light Communications via Intelligent Reflecting Surfaces\n---\n\nVisible light communication, physical-layer" +"---\nabstract: |\n We present the package for the simulation of DM (Dark Matter) particles in fixed target experiments. The most convenient way of this simulation (and the only possible way in the case of beam-dump) is to simulate it in the framework of the Monte-Carlo program performing the particle tracing in the experimental setup. The Geant4 toolkit framework was chosen as the most popular and versatile solution nowadays.\n\n Specifically, the package includes the codes for the simulation of the processes of DM particles production via electron and muon bremsstrahlung off nuclei, resonant in-flight positron annihilation on atomic electrons and gamma to ALP (axion-like particles) conversion on nuclei. Four types of DM mediator particles are considered: vector, scalar, pseudoscalar and axial vector. The total cross sections of bremsstrahlung processes are calculated numerically at exact tree level (ETL).\n\n The code handles both the case of invisible DM decay and of visible decay into $e^+e^-$ ($\\mu^+\\mu^-$ for $Z'$, $\\gamma \\gamma$ for ALP).\n\n The proposed extension implements native Geant4 application programming interfaces (API) designed for these needs and can be unobtrusively embedded into the existing applications.\n\n As an example of its usage, we discuss the results obtained from the simulation of a typical" +"---\nabstract: |\n We study dense packings of a large number of congruent non-overlapping circles inside a square by looking for configurations which maximize the packing density, defined as the ratio between the area occupied by the disks and the area of the square container. The search for these configurations is carried out with the help of two algorithms that we have devised: a first algorithm is in charge of obtaining sufficiently dense configurations starting from a random guess, while a second algorithm improves the configurations obtained in the first stage. 
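To make the packing objective above concrete, here is a minimal sketch (an illustration assuming a unit square and numpy, not the paper's algorithms) that validates a configuration of congruent circles and reports its packing density:

```python
import numpy as np

def packing_density(centers, r, L=1.0):
    """Density of N equal circles of radius r in the square [0, L]^2,
    after checking that the configuration is feasible."""
    c = np.asarray(centers, dtype=float)
    # every circle must lie entirely inside the square
    assert np.all(c >= r) and np.all(c <= L - r), "circle leaves the square"
    # pairwise centre distances must be at least 2r (no overlap)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    assert d.min() >= 2 * r - 1e-12, "overlapping circles"
    return len(c) * np.pi * r**2 / L**2

# known optimum for N = 4: a 2x2 grid of circles with r = 1/4
centers = [(0.25, 0.25), (0.25, 0.75), (0.75, 0.25), (0.75, 0.75)]
print(packing_density(centers, r=0.25))   # ~0.7854 = pi/4
```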
The algorithms can be used sequentially or independently.\n\n The performance of these algorithms is assessed by carrying out numerical tests for configurations with a large number of circles.\nauthor:\n- |\n Paolo Amore\\\n [Facultad de Ciencias, CUICBAS, Universidad de Colima]{},\\\n [Bernal D\u00edaz del Castillo 340, Colima, Colima, Mexico]{}\\\n [paolo@ucol.mx]{}\\\n Tenoch Morales\\\n Facultad de Ciencias, Universidad de Colima,\\\n Bernal D\u00edaz del Castillo 340, Colima, Colima, Mexico\\\n tenochmorales0@gmail.com\ntitle: Efficient algorithms for the dense packing of congruent circles inside a square\n---\n\nIntroduction {#sec:intro}\n============\n\nIn this paper we consider the problem of packing $N$ congruent circles inside a square of side $L$. We ask what is the maximal density $\\rho$" +"---\nabstract: 'In this study, we have developed a new sub-MeV neutron detector that has a high position resolution, energy resolution, directional sensitivity, and low background. The detector is based on a super-fine-grained nuclear emulsion, called the Nano Imaging Tracker (NIT), and it is capable of detecting neutron-induced proton recoils as tracks through topological analysis with sub-micrometric accuracy. We used a type of NIT with AgBr:I crystals of (98 $\\pm$ 10) nm size dispersed in the gelatin. First, we calibrated the performance of the NIT device for detecting monochromatic neutrons with sub-MeV energy generated by nuclear fusion reactions, and the detection efficiency for recoil proton tracks of more than 2 $\\mu$m range was consistently 100% (the 1 $\\sigma$ lower limit was 83%) in accordance with expectations from manual-based analysis. In addition, the recoil energy and angle distributions were in good agreement with kinematical expectations. The primary neutron energy was reconstructed using these quantities, and its resolution was evaluated as 42% (FWHM) at 540 keV. Furthermore, we demonstrated a newly developed automatic track recognition system dedicated to track ranges of more than a few micrometers. It achieved a recognition efficiency of (74 $\\pm$ 4)%, and the recoil energy and angle distributions obtained
For challenging problems, such as the synthesis of decentralized partially-observable controllers, we reduce the run-time from a day to\u00a0minutes.'\nauthor:\n- Roman Andriushchenko\n- 'Milan \u010ce\u0161ka ()'\n- |\n \\\n Sebastian Junges\n- 'Joost-Pieter Katoen'\nbibliography:\n- 'bibliography.bib'\ntitle: ' Inductive Synthesis for Probabilistic Programs Reaches New Horizons[^1] '\n---\n\nIntroduction {#sec:introduction}\n============\n\n#### Background and motivation.\n\nController synthesis for Markov decision processes" +"---\nabstract: 'The utilization of computational photography becomes increasingly essential in the medical field. Today, imaging techniques for dermatology range from two-dimensional (2D) color imagery with a mobile device to professional clinical imaging systems measuring additional detailed three-dimensional (3D) data. The latter are commonly expensive and not accessible to a broad audience. In this work, we propose a novel system and software framework that relies only on low-cost (and even mobile) commodity devices present in every household to measure detailed 3D information of the human skin with a 3D-gradient-illumination-based method. We believe that our system has great potential for early-stage diagnosis and monitoring of skin diseases, especially in vastly populated or underdeveloped areas.'\naddress: |\n $^1$Department of Computer Science, Northwestern University, Evanston, USA\\\n $^2$Pattern Recognition Lab, Friedrich-Alexander-Universit[\u00e4]{}t Erlangen-N[\u00fc]{}rnberg, Germany\\\n $^3$Department of Electrical and Computer Engineering, Northwestern University, Evanston, USA\\\n $^4$Center for Scientific Studies in the Arts, Northwestern University, Evanston, USA\\\n $^5$Department of Radiology, Northwestern University, Chicago, USA\\\n [$^*$ merlin.nau@fau.de]{} \nbibliography:\n- 'Template.bib'\ntitle: 'SkinScan: Low-cost 3D-scanning for dermatologic diagnosis and documentation'\n---\n\nTopographic Imaging, Three-Dimensional Imaging, Photometric Stereo, Dermatologic Imaging\n\nIntroduction {#sec:intro}\n============\n\n3D scanning provides access to a plethora of useful features in the diagnosis and documentation of skin" +"---\nabstract: 'We investigate the problem of computing the probability of winning in an election where voter attendance is uncertain. More precisely, we study the setting where, in addition to a total ordering of the candidates, each voter is associated with a probability of attending the poll, and the attendances of different voters are probabilistically independent. We show that the probability of winning can be computed in polynomial time for the plurality and veto rules. However, it is computationally hard (\\#P-hard) for various other rules, including $k$-approval and $k$-veto for $k>1$, Borda, Condorcet, and Maximin. For some of these rules, it is even hard to find a multiplicative approximation since it is already hard to determine whether this probability is nonzero. 
In contrast, we devise a fully polynomial-time randomized approximation scheme (FPRAS) for the complement probability, namely the probability of losing, for every positional scoring rule (with polynomial scores), as well as for the Condorcet rule.'\nauthor:\n- Aviram Imber\n- Benny Kimelfeld\nbibliography:\n- 'References.bib'\ntitle: Probabilistic Inference of Winners in Elections by Independent Random Voters\n---\n\nIntroduction\n============\n\nThe theory of social choice targets the question of how voter preferences should be aggregated to arrive at a collective" +"---\nabstract: 'Thanks to the increasing growth of computational power and data availability, the research in machine learning has advanced with tremendous rapidity. Nowadays, the majority of automatic decision making systems are based on data. However, it is well known that machine learning systems can present problematic results if they are built on partial or incomplete data. In fact, in recent years several studies have found a convergence of issues related to the ethics and transparency of these systems in the process of data collection and how they are recorded. Although the process of rigorous data collection and analysis is fundamental in the model design, this step is still largely overlooked by the machine learning community. For this reason, we propose a method of data annotation based on Bayesian statistical inference that aims to warn about the risk of discriminatory results of a given data set. In particular, our method aims to deepen knowledge and promote awareness about the sampling practices employed to create the training set, highlighting that the probability of success or failure conditioned to a minority membership is given by the structure of the data available. We empirically test our system on three datasets commonly accessed by" +"---\nauthor:\n- |\n PANDA collaboration\\\n G.\u00a0Barucca\n- 'F.\u00a0Dav\u00ec'\n- 'G.\u00a0Lancioni'\n- 'P.\u00a0Mengucci'\n- 'L.\u00a0Montalto'\n- 'P. P.\u00a0Natali'\n- 'N.\u00a0Paone'\n- 'D.\u00a0Rinaldi'\n- 'L.\u00a0Scalise'\n- 'B.\u00a0Krusche'\n- 'M.\u00a0Steinacher'\n- 'Z.\u00a0Liu'\n- 'C.\u00a0Liu'\n- 'B.\u00a0Liu'\n- 'X.\u00a0Shen'\n- 'S.\u00a0Sun'\n- 'G.\u00a0Zhao'\n- 'J.\u00a0Zhao'\n- 'M.\u00a0Albrecht'\n- 'W.\u00a0Alkakhi'\n- 'S.\u00a0B\u00f6kelmann'\n- 'S.\u00a0Coen'\n- 'F.\u00a0Feldbauer'\n- 'M.\u00a0Fink'\n- 'J.\u00a0Frech'\n- 'V.\u00a0Freudenreich'\n- 'M.\u00a0Fritsch'\n- 'J.\u00a0Grochowski'\n- 'R.\u00a0Hagdorn'\n- 'F.H.\u00a0Heinsius'\n- 'T.\u00a0Held'\n- 'T.\u00a0Holtmann'\n- 'I.\u00a0Keshk'\n- 'H.\u00a0Koch'\n- 'B.\u00a0Kopf'\n- 'M.\u00a0K\u00fcmmel'\n- 'M.\u00a0K\u00fc\u00dfner'\n- 'J.\u00a0Li'\n- 'L.\u00a0Linzen'\n- 'S.\u00a0Maldaner'\n- 'J.\u00a0Oppotsch'\n- 'S.\u00a0Pankonin'\n- 'M.\u00a0Peliz\u00e4us'\n- 'S.\u00a0Pfl\u00fcger'\n- 'J.\u00a0Reher'\n- 'G.\u00a0Reicherz'\n- 'C.\u00a0Schnier'\n- 'M.\u00a0Steinke'\n- 'T.\u00a0Triffterer'\n- 'C.\u00a0Wenzel'\n- 'U.\u00a0Wiedner'\n- 'H.\u00a0Denizli'\n- 'N.\u00a0Er'\n- 'U.\u00a0Keskin'\n- 'S.\u00a0Yerlikaya'\n- 'A.\u00a0Yilmaz'\n- 'R.\u00a0Beck'\n- 'V.\u00a0Chauhan'\n- 'C.\u00a0Hammann'\n- 'J.\u00a0Hartmann'\n- 'B.\u00a0Ketzer'\n- 'J.\u00a0M\u00fcllers'\n- 'B.\u00a0Salisbury'\n- 'C.\u00a0Schmidt'\n- 'U." +"---\nabstract: 'This paper presents a novel design of a soft tactile finger with omni-directional adaptation using multi-channel optical fibers for rigid-soft interactive grasping. 
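As a concrete illustration of the probabilistic-voting model in the elections abstract above, here is a Monte Carlo sketch for the plurality rule (an illustration, not the paper's exact algorithms; tie-breaking is lexicographic here, which is an assumption):

```python
import random

def win_probability(profiles, attend_prob, candidate, trials=20000):
    """Monte Carlo estimate of the probability that `candidate` wins a
    plurality election when voter i attends independently with probability
    attend_prob[i].  profiles[i] is voter i's ranking, best candidate first."""
    candidates = sorted({c for p in profiles for c in p})
    wins = 0
    for _ in range(trials):
        scores = {c: 0 for c in candidates}
        for ranking, q in zip(profiles, attend_prob):
            if random.random() < q:          # voter shows up at the poll
                scores[ranking[0]] += 1      # plurality: top choice gets 1 point
        best = max(scores.values())
        winner = min(c for c in candidates if scores[c] == best)  # lexicographic ties
        wins += (winner == candidate)
    return wins / trials

profiles = [("a", "b", "c"), ("b", "c", "a"), ("b", "a", "c")]
print(win_probability(profiles, [0.9, 0.5, 0.4], "a"))
```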
Machine learning methods are used to train a model for real-time prediction of force, torque, and contact using the tactile data collected. We further integrated such fingers in a reconfigurable gripper design with three fingers so that the finger arrangement can be actively adjusted in real-time based on the tactile data collected during grasping, achieving the process of rigid-soft interactive grasping. Detailed sensor calibration and experimental results are also included to further validate the proposed design for enhanced grasping robustness.'\nauthor:\n- 'Linhan Yang$^{1}$, Xudong Han$^{2}$, Weijie Guo$^{2}$, Fang Wan$^{3}$, Jia Pan$^{4}$, and Chaoyang Song$^{5,*}$ [^1][^2][^3][^4][^5] [^6]'\nbibliography:\n- 'reference.bib'\ntitle: '**Learning-based Optoelectronically Innervated Tactile Finger for Rigid-Soft Interactive Grasping** '\n---\n\nsoft robotics, grasping, optical fiber, tactile sensing\n\nIntroduction {#sec:Introduction}\n============\n\nData-driven grasp learning has been a research field of growing interest in the past decade [@bohg2013data], with a large body of literature contributing to the use of computer vision for grasp prediction [@pinto2016supersizing; @lenz2015deep; @mahler2019learning], high-resolution tactile sensing [@yuan2017gelsight; @yamaguchi2016combining], and advanced gripper design [@ma2017yale; @yuan2020design]. Many datasets have been published to support grasp learning using computer" +"---\nabstract: 'We propose a simple method for automatic speech recognition (ASR) by fine-tuning BERT, which is a language model (LM) trained on large-scale unlabeled text data and can generate rich contextual representations. Our assumption is that given a history context sequence, a powerful LM can narrow the range of possible choices and the speech signal can be used as a simple clue. Hence, compared to conventional ASR systems that train a powerful acoustic model (AM) from scratch, we believe that speech recognition is possible by simply fine-tuning a BERT model. As an initial study, we demonstrate the effectiveness of the proposed idea on the AISHELL dataset and show that stacking a very simple AM on top of BERT can yield reasonable performance.'\naddress: |\n $^1$Nagoya University, Japan $^2$Academia Sinica, Taiwan\\\n $^3$National Taiwan University of Science and Technology, Taiwan\nbibliography:\n- 'refs.bib'\ntitle: 'SPEECH RECOGNITION BY SIMPLY FINE-TUNING BERT'\n---\n\nspeech recognition, BERT, language model\n\nIntroduction {#sec:intro}\n============\n\nConventional automatic speech recognition (ASR) systems consist of multiple separately optimized modules, including an acoustic model (AM), a language model (LM) and a lexicon. In recent years, end-to-end (E2E) ASR models have attracted much attention, due to the belief that jointly optimizing" +"---\nabstract: 'Motivated by the absence of experimental superconductivity in the metallic [$\\textit{Pm}\\overline{3}\\textit{n }$]{}phase of [AlH~3~ ]{}despite the predictions, we reanalyze its vibrational and superconducting properties at pressures $P \\ge 99$ GPa making use of first-principles techniques. In our calculations based on the self-consistent harmonic approximation method that treats anharmonicity beyond perturbation theory, we predict a strong anharmonic correction to the phonon spectra and demonstrate that the superconducting critical temperatures predicted in previous calculations based on the harmonic approximation are strongly suppressed by anharmonicity. 
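A minimal stand-in for the learning-based tactile calibration described in the gripper paper above: regressing 6-axis force/torque from multi-channel optical intensities. The data here is synthetic (an assumption; in practice it would come from the calibration rig), and the model choice is illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic calibration data: 8 optical-channel intensities per sample,
# labelled with 6-axis force/torque targets (Fx, Fy, Fz, Tx, Ty, Tz).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
y = X @ rng.normal(size=(8, 6)) + 0.01 * rng.normal(size=(2000, 6))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```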
The electron-phonon coupling concentrates on the lowest-energy hydrogen-character optical modes at the X point of the Brillouin zone. As a consequence of the strong anharmonic enhancement of their frequency, the electron-phonon coupling is suppressed by at least 30%. The suppression in $\\lambda$ makes $T_c$ smaller than 4.2 K above 120 GPa, which is in good agreement with the experimental evidence. Our results underline that metal hydrides with hydrogen atoms in interstitial sites are subject to huge anharmonic effects.'\nauthor:\n- 'Pugeng Hou$^{1}$, Francesco Belli$^{2,3}$, Raffaello Bianco$^{3}$, Ion Errea$^{2,3,4}$'\ntitle: 'Strong Anharmonic and Quantum Effects in [$\\textit{Pm}\\overline{3}\\textit{n }$]{}[AlH~3~ ]{}Under High Pressure: A First-Principles Study'\n---\n\nIntroduction\n============\n\nMotivated by the quest for metallic and superconducting hydrogen at very" +"---\nabstract: 'We propose a new Markov chain Monte Carlo method in which trial configurations are generated by evolving a state, sampled from a prior distribution, using a Markov transition matrix. We present two prototypical algorithms and derive their corresponding acceptance rules. We first identify the important factors controlling the quality of the sampling. We then apply the method to the problem of sampling polymer configurations with fixed endpoints. Applications of the proposed method range from the design of new generative models to the improvement of the portability of specific Monte Carlo algorithms, like configurational\u2013bias schemes.'\naddress: 'Center for Nonlinear Phenomena and Complex Systems, Code Postal 231, Universit\u00e9 Libre de Bruxelles, Boulevard du Triomphe, 1050 Brussels, Belgium'\nauthor:\n- Jo\u00ebl Mabillard\n- Isha Malhotra\n- Bortolo Matteo Mognetti\nbibliography:\n- 'biblio.bib'\ntitle: Using Markov transition matrices to generate trial configurations in Markov chain Monte Carlo simulations \n---\n\nMonte Carlo methods, Mathematical physics methods, Chemical Physics & Physical Chemistry, Classical statistical mechanics, Markovian processes, Path sampling methods. This is a post-peer-review, pre-copyedit version of an article published in Computer Physics Communications. The final authenticated version is available online at: \n\nIntroduction\n============\n\nMarkov Chain Monte Carlo (MCMC) methods are portable algorithms" +"---\nabstract: 'When an individual\u2019s DNA is sequenced, sensitive medical information becomes available to the sequencing laboratory. A recently proposed way to hide an individual\u2019s genetic information is to mix in DNA samples of other individuals. We assume these samples are known to the individual but unknown to the sequencing laboratory. Thus, these DNA samples act as \u201cnoise\u201d to the sequencing laboratory, but still allow the individual to recover their own DNA samples afterward. Motivated by this idea, we study the problem of hiding a binary random variable $X$ (a genetic marker) with the additive noise provided by mixing DNA samples, using mutual information as a privacy metric. This is equivalent to the problem of finding a worst-case noise distribution for recovering $X$ from the noisy observation among a set of feasible discrete distributions. We characterize upper and lower bounds to the solution of this problem, which are empirically shown to be very close. The lower bound is obtained through a convex relaxation of the original discrete optimization problem, and yields a closed-form expression. 
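The Markov-transition-matrix proposal idea above can be prototyped in a few lines for a discrete state space. The sketch below is one simple reading of it (an independence sampler whose proposal density is the prior pushed through one step of the kernel), not necessarily either of the paper's two prototypical algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
target = np.exp(-np.arange(n) / 2.0); target /= target.sum()   # target distribution pi
prior = np.full(n, 1.0 / n)                                    # prior over states
K = np.full((n, n), 0.1 / (n - 1)); np.fill_diagonal(K, 0.9)   # row-stochastic kernel
q = prior @ K            # induced proposal density: q(y) = sum_x0 prior(x0) K(x0, y)

x, counts = 0, np.zeros(n)
for _ in range(200_000):
    x0 = rng.choice(n, p=prior)      # sample a state from the prior...
    y = rng.choice(n, p=K[x0])       # ...and evolve it one step through K
    # standard independence Metropolis-Hastings acceptance with density q
    if rng.random() < min(1.0, (target[y] * q[x]) / (target[x] * q[y])):
        x = y
    counts[x] += 1
print(np.round(counts / counts.sum(), 3))   # should approach `target`
```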
The upper bound is computed via a greedy algorithm for selecting the mixing proportions.'\nauthor:\n- \n- \n- \nbibliography:\n- 'refs.bib'\ntitle: |\n Private DNA Sequencing:\\" +"---\nabstract: 'Ultrafast vectorially polarized pulses have found many applications in information and energy transfer owing mainly to the presence of strong longitudinal components and their space-polarization non-separability. Due to their broad spectrum, such pulses often exhibit space-time couplings, which significantly affect the pulse propagation dynamics, leading to reduced energy density, or can be utilized to create new effects like a rotating or sliding wavefront at focus. Here, we present a new method for the spatio-temporal characterization of ultrashort cylindrical vector pulses based on a combination of spatially resolved Fourier transform spectroscopy and Mach-Zehnder interferometry. The method provides access to spatially resolved spectral amplitudes and phases of all polarization components of the pulse. We demonstrate the capabilities of the method by completely characterizing a $10$\u00a0fs radially polarized pulse from a Ti:sapphire laser at $800$\u00a0nm.'\nauthor:\n- Apostolos Zdagkas\n- Venkatram Nalla\n- Nikitas Papasimakis\n- 'Nikolay I. Zheludev'\nbibliography:\n- 'termites\\_maze\\_bib.bib'\ntitle: 'Spatio-temporal characterization of ultrashort vector pulses'\n---\n\nIntroduction\n============\n\nSpace-time couplings (STCs) in propagating waves are defined as the dependence of the temporal properties of the electric field on the transverse spatial coordinates [@STCs_review2010]. Mathematically they are revealed as the non-separability of the spatial and temporal terms of" +"---\nabstract: 'We prove upper bounds on the graph diameters of polytopes in two settings. The first is a worst-case bound for polytopes defined by integer constraints in terms of the height of the integers and certain subdeterminants of the constraint matrix, which in some cases improves previously known results. The second is a smoothed analysis bound: given an appropriately normalized polytope, we add small Gaussian noise to each constraint. We consider a natural geometric measure on the vertices of the perturbed polytope (corresponding to the mean curvature measure of its polar) and show that with high probability there exists a \u201cgiant component\u201d of vertices, with measure $1-o(1)$ and polynomial diameter. Both bounds rely on spectral gaps \u2014 of a certain Schr\u00f6dinger operator in the first case, and a certain continuous time Markov chain in the second \u2014 which arise from the log-concavity of the volume of a simple polytope in terms of its slack variables.'\nauthor:\n- |\n Hariharan Narayanan[^1]\\\n TIFR Mumbai\n- |\n Rikhav Shah\\\n UC Berkeley\n- |\n Nikhil Srivastava[^2]\\\n UC Berkeley\ntitle: A Spectral Approach to Polytope Diameter\n---\n\nIntroduction\n============\n\nThe polynomial Hirsch conjecture asks whether the diameter of an arbitrary bounded polytope $P=\\{x\\in{\\mathbb{R}}^d:Ax\\le b\\}$
We specifically show the immense potential of two combinations (DWT-db4 combined with SVM and DWT-db2 combined with RF) compared to others when it comes to diagnosing epileptic seizures in either the balanced or the imbalanced dataset. The results also highlight that MFCC performs worse than all the DWT variants used in this study, and that the mean differences are statistically significant in both the imbalanced and the balanced dataset. Finally, in either the balanced or the imbalanced dataset, the feature extraction techniques, the models, and the interaction between them have a statistically significant effect on the classification accuracy.'\naddress:\n- 'School of Mathematical Sciences, African Institute for Mathematical Sciences, Crystal Gardens, Limbe Cameroon'\n- 'School of Mathematical Sciences, Stellenbosch University, South Africa'\n- 'School of Mathematical Sciences, Rochester Institute of Technology, Rochester, NY 14623'\nauthor:\n- Cyrille Feudjio\n- Victoire Djimna Noyum\n- Younous Perieukeu Mofendjou\n- Rockefeller\n- Ernest Fokou\u00e9\nbibliography:\n- 'cas-refs.bib'\ntitle: A Novel Use of Discrete Wavelet Transform Features" +"---\nabstract: 'In this paper, we present a Model-Based Reinforcement Learning (MBRL) algorithm named *Monte Carlo Probabilistic Inference for Learning COntrol* (MC-PILCO). The algorithm relies on Gaussian Processes (GPs) to model the system dynamics and on a Monte Carlo approach to estimate the policy gradient. This defines a framework in which we ablate the choice of the following components: (i) the selection of the cost function, (ii) the optimization of policies using dropout, (iii) an improved data efficiency through the use of structured kernels in the GP models. The combination of the aforementioned aspects dramatically affects the performance of MC-PILCO. Numerical comparisons in a simulated cart-pole environment show that MC-PILCO exhibits better data efficiency and control performance w.r.t. state-of-the-art GP-based MBRL algorithms. Finally, we apply MC-PILCO to real systems, considering in particular systems with partially measurable states. We discuss the importance of modeling both the measurement system and the state estimators during policy optimization. The effectiveness of the proposed solutions has been tested in simulation and on two real systems, a Furuta pendulum and a ball-and-plate rig. MC-PILCO code is publicly available at .'\nauthor:\n- 'Fabio Amadio$^1$, Alberto Dalla Libera$^1$, Riccardo Antonello$^1$, Daniel Nikovski$^2$, Ruggero Carli$^1$, Diego Romeres$^2$ [^1]" +"---\nabstract: 'A classical result by Lov\u00e1sz asserts that two graphs\u00a0$G$ and\u00a0$H$ are isomorphic if and only if they have the same left profile, that is, for every graph\u00a0$F$, the number of homomorphisms from\u00a0$F$ to\u00a0$G$ coincides with the number of homomorphisms from\u00a0$F$ to\u00a0$H$. Dvo\u0159\u00e1k and later on Dell, Grohe, and Rattan showed that restrictions of the left profile to a class of graphs can capture several different relaxations of isomorphism, including equivalence in counting logics with a fixed number of variables (which contains fractional isomorphism as a special case) and co-spectrality (i.e., two graphs having the same characteristic polynomial). 
On the other hand, a result by Chaudhuri and Vardi asserts that isomorphism is also captured by the right profile, that is, two graphs\u00a0$G$ and\u00a0$H$ are isomorphic if and only if for every graph\u00a0$F$, the number of homomorphisms from\u00a0$G$ to\u00a0$F$ coincides with the number of homomorphisms from\u00a0$H$ to\u00a0$F$. In this paper, we embark on a study of the restrictions of the right profile by investigating relaxations of isomorphism that can or cannot be captured by restricting the right profile to a fixed class of graphs. Our results" +"---\nabstract: 'In classification with a reject option, the classifier is allowed in uncertain cases to abstain from prediction. The classical cost-based model of a reject option classifier requires the cost of rejection to be defined explicitly. An alternative bounded-improvement model, avoiding the notion of the reject cost, seeks a classifier with a guaranteed selective risk and maximal cover. We coin a symmetric definition, the bounded-coverage model, which seeks a classifier with minimal selective risk and guaranteed coverage. We prove that despite their different formulations the three rejection models lead to the same prediction strategy: a Bayes classifier endowed with a randomized Bayes selection function. We define a notion of a proper uncertainty score as a scalar summary of prediction uncertainty sufficient to construct the randomized Bayes selection function. We propose two algorithms to learn the proper uncertainty score from examples for an arbitrary black-box classifier. We prove that both algorithms provide Fisher consistent estimates of the proper uncertainty score and we demonstrate their efficiency on different prediction problems including classification, ordinal regression and structured output classification.'\nauthor:\n- |\n Vojtech Franc xfrancv@fel.cvut.cz Daniel Prusa prusa@fel.cvut.cz Vaclav Voracek voracva1@fel.cvut.cz\\\n Department of Cybernetics, Faculty of Electrical Engineering\\\n Czech Technical" +"---\nabstract: 'We study the evolution of two mutually interacting games with both pairwise games as well as the public goods game on different topologies. On 2d square lattices, we reveal that the game-game interaction can promote the prevalence of cooperation in all cases; the cooperation-defection phase transitions even become absent, and fairly high cooperation is expected when the interaction becomes very strong. A mean-field theory is developed that points out new dynamical routes arising therein. Detailed analysis shows indeed that there are rich categories of interactions in either the individual or the bulk scenario: invasion, neutral, and catalyzed types; their combination puts cooperators at a persistent advantage, which boosts cooperation. The robustness of the revealed reciprocity is strengthened by the studies of model variants, including asymmetrical or time-varying interactions, games of different types, games with time-scale separation, different updating rules, etc. The structural complexities of the underlying population, such as Newman\u2013Watts small world networks, Erd\u0151s\u2013R\u00e9nyi random networks, and Barab\u00e1si\u2013Albert networks, also do not alter the working of the dynamical reciprocity. In particular, as the number of games involved increases, the cooperation level continuously improves in general. 
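The coupled-games analysis above builds on this kind of lattice simulation. For orientation, a minimal sketch of the classic one-game baseline (the weak prisoner's dilemma on a 2d square lattice with imitate-the-best updating and payoffs R=1, T=b, S=P=0; not the authors' two-game model):

```python
import numpy as np

rng = np.random.default_rng(1)
L, b = 50, 1.05                        # lattice size; temptation parameter
S = rng.integers(0, 2, size=(L, L))    # 1 = cooperator, 0 = defector

def payoff(S):
    P = np.zeros_like(S, dtype=float)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # von Neumann neighbours
        N = np.roll(S, shift, axis=(0, 1))
        P += np.where(S == 1, N * 1.0, N * b)          # C earns R=1, D earns T=b
    return P

for _ in range(200):                   # synchronous imitate-the-best update
    P = payoff(S)
    best, bestP = S.copy(), P.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        Ns, Np = np.roll(S, shift, axis=(0, 1)), np.roll(P, shift, axis=(0, 1))
        take = Np > bestP
        best, bestP = np.where(take, Ns, best), np.where(take, Np, bestP)
    S = best
print("cooperator fraction:", S.mean())
```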
Our work thus uncovers a new class of cooperation mechanism and" +"---\nabstract: 'The paper focuses on synthesizing optimal contact curves that can be used to ensure a rolling constraint between two bodies in relative motion. We show that geodesic based contact curves generated on both the contacting surfaces are sufficient conditions to ensure rolling. The differential geodesic equations, when modified, can ensure proper disturbance rejection in case the system of interacting bodies is perturbed from the desired curve. A corollary states that geodesic curves are generated on the surface if rolling constraints are satisfied. Simulations in the context of in-hand manipulations of the objects are used as examples.'\naddress: 'Department of Mechanical Engineering, Indian Institute of Technology Delhi, India'\nauthor:\n- 'Rajesh Kumar, Sudipto Mukherjee'\nbibliography:\n- 'references.bib'\ntitle: A note on synthesizing geodesic based contact curves\n---\n\nGeodesic Curves ,Contact Curves ,Rolling\n\nIntroduction\n============\n\nRobotic grasping and manipulation is achieved through the interaction of robotic fingers with objects [@andrychowicz2018learning; @sundaralingam2018geometric; @chong1993generalized]. In order to carry out the manipulation, the robotic fingers either roll [@paljug1994control; @bicchi1995dexterous; @maekawa1995tactile], slide [@shi2017dynamic; @spiers2018variable] or stick [@chavan2018stable] on the object surface or use a combination of all three motions [@cherif1999planning]. Rolling contacts during in-hand manipulation are known to accord a larger workspace to the" +"---\nabstract: 'We propose using a computational model of the auditory cortex as a defense against adversarial attacks on audio. We apply several white-box iterative optimization-based adversarial attacks to an implementation of Amazon Alexa\u2019s HW network, and a modified version of this network with an integrated cortical representation, and show that the cortical features help defend against universal adversarial examples. At the same level of distortion, the adversarial noises found for the cortical network are always less effective for universal audio attacks.'\naddress: |\n $^1$University of Maryland, College Park MD\\\n $^2$Johns Hopkins University, Baltimore MD.\nbibliography:\n- 'refs.bib'\ntitle: Cortical Features for Defense Against Adversarial Audio Attacks\n---\n\nAdversarial attacks, cortical representation, STRF, wake-word detection\n\nIntroduction {#sec:intro}\n============\n\nAs voice assistant systems like Amazon Alexa, Google Assistant, Apple Siri and Microsoft Cortana become more ubiquitous and integrated into modern life, so does the risk of antagonists taking control of devices that we depend on. Adversarial Attacks on Audio [@fgsm] are one way that a voice assistant could be subverted to send an unwanted message, or bank transfer, or unlock a home. All the mentioned voice assistants rely on wake-word detection to initiate their automatic speech recognition (ASR) systems that give" +"---\nabstract: 'It is necessary to improve the performance of some special classes or to particularly protect them from attacks in adversarial learning. This paper proposes a framework combining cost-sensitive classification and adversarial learning together to train a model that can distinguish between protected and unprotected classes, such that the protected classes are less vulnerable to adversarial examples. 
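A generic starting point for the cost-sensitive half of such a framework (a stand-in, not the authors' exact formulation) is per-class loss weighting, e.g. in PyTorch, where the protected classes carry a larger misclassification cost:

```python
import torch
import torch.nn as nn

num_classes = 10
protected = [3, 7]                        # hypothetical protected class indices
weights = torch.ones(num_classes)
weights[protected] = 5.0                  # misclassifying protected classes costs more

criterion = nn.CrossEntropyLoss(weight=weights)
logits = torch.randn(8, num_classes, requires_grad=True)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(logits, labels)          # plug into any training loop, optionally
loss.backward()                           # on adversarially perturbed inputs
```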
We find in this framework an interesting phenomenon during the training of deep neural networks, called the Min-Max property: the absolute values of most parameters in the convolutional layers approach zero, while the absolute values of a few parameters become significantly larger. Based on this Min-Max property, which is formulated and analyzed from the viewpoint of random distributions, we further build a new defense model against adversarial examples for adversarial robustness improvement. An advantage of the built model is that it performs better than the standard one and can be combined with adversarial training to achieve an improved performance. It is experimentally confirmed that, regarding the average accuracy of all classes, our model is almost the same as the existing models when an attack does not occur and is better than the existing models when an attack occurs. Specifically, regarding" +"---\nabstract: 'Fluctuations associated with relaxation in the far-from-equilibrium regime are of fundamental interest for a large variety of systems within broad scales. Recent advances in techniques such as spectroscopy have generated the possibility for measuring the fluctuations of mesoscopic systems in connection to the relaxation processes when driving the underlying quantum systems far from equilibrium. We present a general nonequilibrium Fluctuation-Dissipation Theorem (FDT) for quantum Markovian processes where the detailed-balance condition is violated. Apart from the fluctuations, the relaxation involves an extra correlation that is governed by the quantum curl flux that emerges in the far-from-equilibrium regime. Such a contribution vanishes at thermal equilibrium, so that the conventional FDT is recovered. We finally apply the nonequilibrium FDT to molecular junctions, elaborating the detailed-balance-breaking effects on the optical transmission spectrum. Our results have the advantage of and exceed the scope of the fluctuation-dissipation relation in the perturbative and near-equilibrium regimes, and are of broad interest for the study of quantum thermodynamics.'\nauthor:\n- Zhedong Zhang\n- Xuanhua Wang\n- Jin Wang\ntitle: 'Quantum Fluctuation-Dissipation Theorem Far From Equilibrium'\n---\n\n[^1]\n\nIntroduction {#introduction .unnumbered}\n============\n\nQuantum thermodynamics is an active subject of statistical mechanics emergent from quantum mechanics. Recent exciting" +"---\nabstract: 'The communication network context in current systems like 5G, cloud and IoT (Internet of Things), presents an ever-increasing number of users, applications and services that are highly distributed with distinct and heterogeneous communications requirements. Resource allocation in this context requires dynamic, efficient and customized solutions and Bandwidth Allocation Models (BAMs) are an alternative to support this new trend. This paper proposes the BAMSDN (Bandwidth Allocation Model through Software-Defined Networking) framework that dynamically allocates resources (bandwidth) for an MPLS (MultiProtocol Label Switching) network using an SDN (Software-Defined Networking)/OpenFlow strategy with BAM. The framework adopts an innovative implementation approach for BAM systems by controlling the MPLS network using SDN with OpenFlow. 
Experimental results suggest that using SDN/OpenFlow with BAM for bandwidth allocation offers effective advantages for MPLS networks requiring flexible resource sharing among applications and facilitates the migration path to an SDN/OpenFlow network.'\nauthor:\n- 'Eliseu\u00a0Torres, Rafael\u00a0Reale, Leobino\u00a0Sampaio, and\u00a0Joberto\u00a0Martins,\u00a0[^1][^2][^3][^4]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'sbc-template.bib'\ntitle: 'A SDN/OpenFlow Framework for Dynamic Resource Allocation based on Bandwidth Allocation Model'\n---\n\nSDN, OpenFlow, Resource Allocation, Bandwidth Allocation Model, Dynamic Bandwidth Allocation, MPLS, MAM, RDM.\n\nIntroduction {#sec:introduction}\n==========" +"---\nabstract: 'We study convergence rates of Gibbs measures, with density proportional to $e^{-f(x)/t}$, as $t \\rightarrow 0$ where $f : \\mathbb{R}^d \\rightarrow \\mathbb{R}$ admits a unique global minimum at $x^\\star$. We focus on the case where the Hessian is not definite at $x^\\star$. We assume instead that the minimum is strictly polynomial and give a higher order nested expansion of $f$ at $x^\\star$, which depends on every coordinate. We give an algorithm yielding such an expansion if the polynomial order of $x^\\star$ is no more than $8$, in connection with Hilbert\u2019s $17^{\\text{th}}$ problem. However, we prove that the case where the order is $10$ or higher is fundamentally different and that further assumptions are needed. We then give the rate of convergence of Gibbs measures using this expansion. Finally we adapt our results to the multiple well case.'\nauthor:\n- 'Pierre Bras[^1]'\ntitle: Convergence rates of Gibbs measures with degenerate minimum\n---\n\nIntroduction\n============\n\nGibbs measures and their convergence properties are often used in stochastic optimization to minimize a function defined on $\\mathbb{R}^d$. That is, let $f : \\mathbb{R}^d \\rightarrow \\mathbb{R}$ be a measurable function and let $x^\\star \\in \\mathbb{R}^d$ be such that $f$ admits a global minimum at" +"---\nabstract: 'The ethical consequences, constraints upon and regulation of algorithms arguably represent the defining challenges of our age, asking us to reckon with the rise of computational technologies whose potential to radically transform social and individual orders and identities in unforeseen ways is already being realised. Fittingly, concurrent with the emergence of such epoch-shaping technologies has emerged a rapidly expanding and multi-disciplinary set of research disciplines focused on these very questions. As the inexorable march of computational technologies encroaches across society and academic disciplines, it is natural that diverse specialisations including computer science, moral philosophy, engineering, jurisprudence and economics should turn their attention to the algorithmic zeitgeist. Yet despite the *multi*-disciplinary impact of this *algorithmic turn*, there remains some way to go in motivating the *cross*-disciplinary collaboration that is crucial to advancing feasible proposals for the ethical design, implementation and regulation of algorithmic and automated systems. 
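Returning to the Gibbs measures $e^{-f(x)/t}$ with degenerate minimum studied above: their concentration as $t \to 0$ is easy to visualize numerically. A minimal one-dimensional sketch for a quartic $f$ (whose Hessian vanishes at the minimizer, so the usual Gaussian $t^{1/2}$ scaling does not apply):

```python
import numpy as np

f = lambda x: x**4                 # degenerate minimum at 0 (Hessian vanishes there)
x = np.linspace(-2, 2, 20001)

for t in (1.0, 0.1, 0.01, 0.001):
    w = np.exp(-f(x) / t)
    w /= np.trapz(w, x)            # normalized Gibbs density on the grid
    std = np.sqrt(np.trapz(w * x**2, x))
    print(f"t={t:7.3f}  std={std:.4f}")   # shrinks like t**(1/4), not t**(1/2)
```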
In this work, we provide a framework to assist cross-disciplinary collaboration by presenting a \u2018Four C\u2019s Framework\u2019 covering key computational considerations researchers across such diverse fields should consider when approaching these questions: (i) computability, (ii) complexity, (iii) consistency and (iv) controllability. In addition, we provide examples of how insights from ethics, philosophy and population ethics" +"---\nabstract: 'Indirect detection experiments typically measure the flux of annihilating dark matter (DM) particles propagating freely through galactic halos. We consider a new scenario where celestial bodies \u201cfocus\" DM annihilation events, increasing the efficiency of halo annihilation. In this setup, DM is first captured by celestial bodies, such as neutron stars or brown dwarfs, and then annihilates within them. If DM annihilates to sufficiently long-lived particles, they can escape and subsequently decay into detectable radiation. This produces a distinctive annihilation morphology, which scales as the product of the DM and celestial body densities, rather than as DM density squared. We show that this signal can dominate over the halo annihilation rate in $\\gamma$-ray observations in both the Milky Way Galactic center and globular clusters. We use *Fermi* and H.E.S.S. data to constrain the DM-nucleon scattering cross section, setting powerful new limits down to $\\sim10^{-39}~$cm$^2$ for sub-GeV DM using brown dwarfs, which is up to nine orders of magnitude stronger than existing limits. We demonstrate that neutron stars can set limits for TeV-scale DM down to about $10^{-47}~$cm$^2$.'\nauthor:\n- 'Rebecca K. Leane'\n- Tim Linden\n- Payel Mukhopadhyay\n- Natalia Toro\nbibliography:\n- 'bibliography1.bib'\ntitle: 'Celestial-Body Focused Dark Matter" +"---\nabstract: 'We show that $3$-graphs on $n$ vertices whose codegree is at least $(2/3 + o(1))n$ can be decomposed into tight cycles and admit Euler tours, subject to the trivial necessary divisibility conditions. We also provide a construction showing that our bounds are best possible up to the $o(1)$ term. All together, our results answer in the negative some recent questions of Glock, Joos, K\u00fchn, and Osthus.'\naddress:\n- 'Fachbereich Mathematik, Universit\u00e4t Hamburg, Hamburg, Germany'\n- 'The Czech Academy of Sciences, Institute of Computer Science, Pod Vod\u00e1renskou v\u011b\u017e\u00ed 2, 182 07 Prague, Czechia'\nauthor:\n- Sim\u00f3n Piga\n- 'Nicol\u00e1s Sanhueza-Matamala'\nbibliography:\n- 'euler.bib'\ntitle: 'Cycle decompositions in $3$-uniform hypergraphs'\n---\n\n=1\n\n[^1]\n\nIntroduction\n============\n\nCycle decompositions\n--------------------\n\nGiven a\u00a0$k$-uniform hypergraph\u00a0$H$, a\u00a0*decomposition of\u00a0$H$* is a collection of subgraphs of $H$ such that every edge of\u00a0$H$ is covered exactly once. When these subgraphs are all isomorphic copies of a single hypergraph\u00a0$F$ we say that it is an\u00a0*$F$-decomposition*, and that $H$ is *$F$-decomposable*. Finding decompositions of hypergraphs is one of the oldest problems in combinatorics. For instance, the well-known problem of the existence of designs and Steiner systems can be cast as the problem of" +"---\nabstract: 'We show that a properly stratified algebra is Gorenstein if and only if the characteristic tilting module coincides with the characteristic cotilting module. 
We further show that properly stratified Gorenstein algebras $A$ enjoy strong homological properties such as all Gorenstein projective modules being properly stratified and all endomorphism rings ${\\operatorname{End}}_A(\\Delta(i))$ being Frobenius algebras. We apply our results to the study of properly stratified algebras that are minimal Auslander-Gorenstein algebras in the sense of Iyama-Solberg and calculate under suitable conditions their Ringel duals. This applies in particular to all centraliser algebras of nilpotent matrices.'\naddress:\n- 'Institute of algebra and number theory, University of Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany'\n- 'Institute of algebra and number theory, University of Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany'\nauthor:\n- Tiago Cruz\n- Ren\u00e9 Marczinzik\nbibliography:\n- 'ref.bib'\ntitle: On properly stratified Gorenstein algebras\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nQuasi-hereditary algebras constitute an important class of finite dimensional algebras including many well-studied algebras such as algebras of global dimension at most two, Schur algebras, see for example [@D] or [@Gre], and blocks of category $\\mathcal{O}$, see for example [@H].\n\nStandardly stratified algebras were introduced as a generalisation of quasi-hereditary algebras in [@CPS]." +"---\nabstract: 'The paths leading to future networks are pointing towards a data-driven paradigm to better cater to the explosive growth of mobile services as well as the increasing heterogeneity of mobile devices, many of which generate and consume large volumes and variety of data. These paths are also hampered by significant challenges in terms of security, privacy, services provisioning, and network management. Blockchain, which is a technology for building distributed ledgers that provide an immutable log of transactions recorded in a distributed network, has become prominent recently as the underlying technology of cryptocurrencies and is revolutionizing data storage and processing in computer network systems. For future data-driven networks (DDNs), blockchain is considered as a promising solution to enable the secure storage, sharing, and analytics of data, privacy protection for users, robust, trustworthy network control, and decentralized routing and resource managements. However, many important challenges and open issues remain to be addressed before blockchain can be deployed widely to enable future DDNs. In this article, we present a survey on the existing research works on the application of blockchain technologies in computer networks, and identify challenges and potential solutions in the applications of blockchains in future DDNs. We identify application" +"---\nabstract: 'Accurate channel estimation is essential for achieving the performance gains offered by reconfigurable intelligent surface (RIS)-aided wireless communications. A variety of channel estimation methods have been proposed for such systems; however, none of the existing methods takes into account the effect of synchronization errors such as carrier frequency offset (CFO). In general, CFO can significantly degrade the channel estimation performance of orthogonal frequency division multiplexing (OFDM) systems. Motivated by this, we investigate the effect of CFO on channel estimation for RIS-aided OFDM systems. Furthermore, we propose a joint CFO and channel impulse response (CIR) estimation method for these systems. 
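The impairment that the joint CFO/CIR estimator above must contend with can be reproduced in a few lines: a residual CFO of $\epsilon$ subcarrier spacings multiplies the time-domain OFDM signal by a phase ramp, producing a common phase error plus inter-carrier interference. A generic numpy sketch (the textbook OFDM model, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps = 64, 0.05                      # subcarriers; CFO in subcarrier spacings
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N) / np.sqrt(2)  # QPSK symbols
x = np.fft.ifft(X) * np.sqrt(N)        # time-domain OFDM symbol
n = np.arange(N)
y = x * np.exp(2j * np.pi * eps * n / N)   # CFO acts as a phase ramp in time
Y = np.fft.fft(y) / np.sqrt(N)             # demodulated, now rotated + ICI-corrupted
print("per-subcarrier MSE due to CFO:", np.mean(np.abs(Y - X) ** 2))
```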
Simulation results demonstrate the effectiveness of our proposed method, and also demonstrate that the use of time-domain rather than frequency-domain estimation in this context results in an improvement in the mean-squared error (MSE) performance of channel estimation as well as in a significantly lower overall computational complexity.'\nauthor:\n- 'Sumin\u00a0Jeong,\u00a0 Arman Farhang,\u00a0 Nemanja\u00a0Stefan\u00a0Perovi\u0107,\u00a0 and\u00a0Mark\u00a0F.\u00a0Flanagan,\u00a0 [^1][^2][^3]'\ntitle: 'Low-Complexity Joint CFO and Channel Estimation for RIS-aided OFDM Systems'\n---\n\nReconfigurable intelligent surface (RIS), channel estimation, carrier frequency offset (CFO).\n\nIntroduction\n============\n\nIn wireless communication systems, the channel is usually considered to be" +"---\nabstract: 'Advances in artificial intelligence are driven by technologies inspired by the brain, but these technologies are orders of magnitude less powerful and energy efficient than biological systems. Inspired by the nonlinear dynamics of neural networks, new unconventional computing hardware has emerged with the potential to exploit natural phenomena and gain efficiency, in a similar manner to biological systems. Physical reservoir computing demonstrates this with a variety of unconventional systems, from optical-based to memristive systems. Reservoir computers provide a nonlinear projection of the task input into a high-dimensional feature space by exploiting the system\u2019s internal dynamics. A trained readout layer then combines features to perform tasks, such as pattern recognition and time-series analysis. Despite progress, achieving state-of-the-art performance without external signal processing to the reservoir remains challenging. Here we perform an initial exploration of three magnetic materials in thin-film geometries via micro-scale simulation. Our results reveal that basic spin properties of magnetic films generate the required nonlinear dynamics and memory to solve machine learning tasks (although there would be practical challenges in exploiting these particular materials in physical implementations). The method of exploration can be applied to other materials, so this work opens up the possibility of testing different" +"---\nabstract: 'The importance of state estimation in fluid mechanics is well-established; it is required for accomplishing several tasks, including design/optimization, active control, and future state prediction. A common tactic in this regard is to rely on reduced-order models. Such approaches, in general, use measurement data from a single time instance. However, often data available from sensors is sequential, and ignoring it results in information loss. In this paper, we propose a novel deep learning-based state estimation framework that learns from sequential data. The proposed model structure consists of a recurrent cell to pass information from different time steps, enabling this information to recover the full state. We illustrate that utilizing sequential data allows for state recovery from minimal and noisy sensor measurements. For efficient recovery of the state, the proposed approach is coupled with an auto-encoder based reduced-order model. 
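A minimal sketch of such a sequential estimator, assuming PyTorch and hypothetical sizes (the paper's exact architecture may differ): a GRU consumes the sensor history and is decoded to the latent coordinates of an autoencoder-based reduced-order model.

```python
import torch
import torch.nn as nn

class SequentialStateEstimator(nn.Module):
    """GRU over sparse, noisy sensor readings; the head maps the hidden
    state to latent ROM coordinates (all sizes are assumptions)."""
    def __init__(self, n_sensors=8, hidden=64, n_latent=16):
        super().__init__()
        self.rnn = nn.GRU(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_latent)

    def forward(self, sensors):           # sensors: (batch, time, n_sensors)
        h, _ = self.rnn(sensors)          # hidden state accumulates past information
        return self.head(h)               # latent trajectory (batch, time, n_latent)

est = SequentialStateEstimator()
z = est(torch.randn(4, 50, 8))            # decode z with the ROM's decoder to
print(z.shape)                            # recover the full flow field per step
```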
We illustrate the performance of the proposed approach using three examples, and it is found to outperform other alternatives existing in the literature.'\nauthor:\n- |\n Yash Kumar\\\n Department of Mechanical Engineering\\\n Delhi Technological University\\\n Shahbad Daulatpur, Main Bawana Road, Delhi-110042, India\\\n `yashk8481@gmail.com`\\\n Pranav Bahl\\\n Department of Mechanical Engineering\\\n Delhi Technological University\\\n Shahbad Daulatpur, Main Bawana Road, Delhi-110042, India\\\n `bahlpranav24@gmail.com`\\\n Souvik" +"---\nbibliography:\n- 'defREF.bib'\n- 'SYMdefect.bib'\ntitle: ' Defect $a$-Theorem and $a$-Maximization '\n---\n\n[preprint]{}\n\nYifan Wang$^{1,2}$\n\n$^{1}$Center of Mathematical Sciences and Applications, Harvard University, Cambridge, MA 02138, USA\\\n$^{2}$Jefferson Physical Laboratory, Harvard University, Cambridge, MA 02138, USA\n\nabstract\n\nConformal defects describe the universal behaviors of a conformal field theory (CFT) in the presence of a boundary or more general impurities. The coupled critical system is characterized by new conformal anomalies which are analogous to, and generalize those of standalone CFTs. Here we study the conformal $a$- and $c$-anomalies of four dimensional defects in CFTs of general spacetime dimensions greater than four. We prove that under unitary defect renormalization group (RG) flows, the defect $a$-anomaly must decrease, thus establishing the defect $a$-theorem. For conformal defects preserving minimal supersymmetry, the full defect symmetry contains a distinguished $U(1)_R$ subgroup. We derive the anomaly multiplet relations that express the defect $a$- and $c$-anomalies in terms of the defect (mixed) \u2019t Hooft anomalies for this $U(1)_R$ symmetry. Once the $U(1)_R$ symmetry is identified using the defect $a$-maximization principle which we prove, this enables" +"---\nabstract: 'Starlight subtraction algorithms based on the method of Karhunen-Lo\u00e8ve eigenimages have proved invaluable to exoplanet direct imaging. However, they scale poorly in runtime when paired with differential imaging techniques. In such observations, reference frames and frames to be starlight-subtracted are drawn from the same set of data, requiring a new subset of references (and eigenimages) for each frame processed to avoid self-subtraction of the signal of interest. The data rates of extreme adaptive optics instruments are such that the only way to make this computationally feasible has been to downsample the data. We develop a technique that updates a pre-computed singular value decomposition of the full dataset to remove frames (i.e. a \u201cdowndate\u201d) without a full recomputation, yielding the modified eigenimages. This not only enables analysis of much larger data volumes in the same amount of time, but also exhibits near-linear scaling in runtime as the number of observations increases. We apply this technique to archival data and investigate its scaling behavior for very large numbers of frames $N$. 
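For scale, the naive per-frame recomputation that such a downdate avoids looks roughly like the following (a standard KLIP-style projection baseline with one full SVD per frame; sizes and names are hypothetical, and this is not the paper's downdating algorithm):

```python
import numpy as np

def naive_klip(frames, n_modes=20):
    """Subtract starlight from each frame using eigenimages built from all
    *other* frames -- the O(N) SVD recomputations a downdate replaces."""
    N = frames.shape[0]
    out = np.empty_like(frames)
    for i in range(N):
        ref = np.delete(frames, i, axis=0)     # drop frame i from the references
        mu = ref.mean(axis=0)
        _, _, Vt = np.linalg.svd(ref - mu, full_matrices=False)  # full SVD each time
        Z = Vt[:n_modes]                       # eigenimages (principal directions)
        f = frames[i] - mu
        out[i] = f - Z.T @ (Z @ f)             # remove the starlight projection
    return out

cube = np.random.default_rng(2).normal(size=(100, 1024))  # 100 frames, 32x32 flattened
residuals = naive_klip(cube)
```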
The resulting algorithm provides speed improvements of $2.6\\times$ (for 200 eigenimages at $N = 300$) to $140 \\times$ (at $N = 10^4$) with the advantage only increasing as $N$" +"---\nabstract: 'The near vanishing of the cosmological constant is one of the most puzzling open problems in theoretical physics. We consider a system, the so-called framid, that features a technically similar problem. Its stress-energy tensor has a Lorentz-invariant expectation value on the ground state, yet there are no standard, symmetry-based selection rules enforcing this, since the ground state spontaneously breaks boosts. We verify the Lorentz invariance of the expectation value in question with explicit one-loop computations. These, however, yield the expected result only thanks to highly nontrivial cancellations, which are quite mysterious from the low-energy effective theory viewpoint.'\nbibliography:\n- 'library.bib'\n---\n\n[graphs]{}\n\n\\\n\\\n\nIntroduction\n============\n\nThe cosmological constant problem has been a topic of heated debate for decades. In fact, it is difficult to find two theoretical physicists who agree on what exactly the problem is, how to phrase it, how to quantify it, or how many problems there are. Some maintain that there is no problem at all. (We realize that some of our colleagues will take issue with this paragraph as well.)\n\nWe do not intend to enter the debate ourselves, nor to review the extensive literature on the subject [^1]. Our aim with this" +"---\nabstract: 'We review the properties of neutron matter in the low-density regime. In particular, we revisit its ground state energy and the superfluid neutron pairing gap, and analyze their evolution from the weak to the strong coupling regime. The calculations of the energy and the pairing gap are performed, respectively, within the Brueckner\u2013Hartree\u2013Fock approach of nuclear matter and the BCS theory using the chiral nucleon-nucleon interaction of Entem and Machleidt at N$^3$LO and the Argonne V18 phenomenological potential. Results for the energy are also shown for a simple Gaussian potential with a strength and range adjusted to reproduce the $^1S_0$ neutron-neutron scattering length and effective range. Our results are compared with those of quantum Monte Carlo calculations for neutron matter and cold atoms. The Tan contact parameter in neutron matter is also calculated, finding reasonable agreement with experimental data from ultra-cold atoms only at very low densities. We find that low-density neutron matter exhibits a behavior close to that of a Fermi gas at the unitary limit, although this limit is actually never reached. We also review the properties (energy, effective mass and quasiparticle residue) of a spin-down neutron impurity immersed in a low-density free Fermi gas of" +"---\nabstract: 'In our article\u00a0[@AubrunKari2013] we state that the Domino problem is undecidable for all Baumslag-Solitar groups $BS(m,n)$, and claim that the proof is a direct adaptation of the construction of a weakly aperiodic subshift of finite type for $BS(m,n)$ given in the paper. In this addendum, we clarify this point and give a detailed proof of the undecidability result. 
We assume the reader is already familiar with the article\u00a0[@AubrunKari2013].'\nauthor:\n- Nathalie Aubrun\n- Jarkko Kari\nbibliography:\n- 'biblio.bib'\ntitle: 'Addendum to \u201c*Tilings problems on Baumslag-Solitar groups*\u201d'\n---\n\nIntroduction {#section.introduction .unnumbered}\n============\n\nIn\u00a0[@AubrunKari2013] we state as a direct corollary of the main construction that the Domino problem is undecidable on all Baumslag-Solitar groups $BS(m,n)$. It turns out that it is not as immediate as we suggested, and we believe that this result deserves a full explanation.\n\nThe proof is based on the proof of the undecidability of the Domino problem on the discrete hyperbolic plane given by the second author in\u00a0[@Kari2007]. The latter is an adaptation of a former construction of a strongly aperiodic SFT on\u00a0$\\Z^2$\u00a0[@Kari1996]. This proof proceeds by reduction to the immortality problem for rational piecewise affine maps. We first recall" +"---\nauthor:\n- Marius Gerbershagen\nbibliography:\n- 'bibliography.bib'\ntitle: Monodromy methods for torus conformal blocks and entanglement entropy at large central charge\n---\n\nIntroduction {#sec:introduction}\n============\n\nEntanglement entropy is a measure of the amount of entanglement between two parts of a quantum system. It is defined as the von Neumann entropy of the reduced density matrix $\\rho_A$ for a subsystem $A$. In general, the entanglement entropy depends on details of the theory and state in question such as the spectrum and operator content. However, certain universal features are common to all quantum field theories. For example, the leading order divergence in the UV cutoff usually scales with the area of the boundary of the subregion $A$ [@Bombelli:1986rw; @Srednicki:1993im]. Conformal field theories in two dimensions admit more general universal features. In particular, the entanglement entropy of a single interval $A$ at zero temperature is given by [@Calabrese:2004eu] $$S_A = \\frac{c}{3} \\log(l/\\epsilon_\\text{UV}),$$ depending only on the central charge, irrespective of any other details such as the OPE coefficients or the spectrum of the theory. For subsystems $A$ consisting of multiple intervals, the entanglement entropy is no longer universal for any CFT. However, as shown in [@Hartman:2013mia], in the semiclassical large central charge" +"---\nauthor:\n- 'P. Stephenson'\n- 'M. Galand'\n- 'P. D. Feldman'\n- 'A. Beth'\n- 'M. Rubin'\n- 'D. Bockel\u00e9e-Morvan'\n- 'N. Biver'\n- 'Y.-C Cheng'\n- 'J. Parker'\n- 'J. Burch'\n- 'F. L. Johansson'\n- 'A. Eriksson'\nbibliography:\n- 'my\\_collection.bib'\ndate: 'Received XXXX / Accepted XXXX'\ntitle: 'Multi-instrument analysis of far-ultraviolet aurora in the southern hemisphere of comet 67P/Churyumov-Gerasimenko'\n---\n\nIntroduction {#sec: Intro}\n============\n\nAuroras, most familiarly observed at high latitudes over the northern and southern regions of Earth, have been detected at other bodies in the Solar System. Auroral emissions are generated by (usually charged) extra-atmospheric particles colliding with an atmosphere, causing excitation [@Galand2002]. At Earth, other magnetised planets, and the Jovian moon Ganymede, the magnetospheric structure restricts entry of these extra-atmospheric particles into the atmosphere, confining auroras to regions with open field lines. 
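For context on the single-interval formula quoted above: it arises as the $n \to 1$ limit of the Rényi entropies computed via the replica trick (the standard Calabrese–Cardy result, stated here for reference):

```latex
S_A^{(n)} = \frac{1}{1-n}\log \operatorname{Tr}\rho_A^n
          = \frac{c}{6}\left(1+\frac{1}{n}\right)\log\frac{l}{\epsilon_{\text{UV}}},
\qquad
S_A = \lim_{n\to 1} S_A^{(n)} = \frac{c}{3}\log\frac{l}{\epsilon_{\text{UV}}}.
```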
However, comets are unmagnetised [@Heinisch2019], so they exhibit more similarities to regions of Mars with no crustal magnetisation, where diffuse auroras have been seen [@Schneider2015]. The Rosetta mission [@Glassmeier2007] observed comet 67P/Churyumov-Gerasimenko (hereafter 67P) from within the coma throughout the two-year escort phase, allowing measurement of cometary emissions from a new, close perspective. Earth-based observations of comets in the far-ultraviolet (FUV) with the" +"---\nabstract: '**Abstract** We study the buckling of a one fiber composite whose matrix stiffness is slightly dependent on the compressive force. We show that the equilibrium curves of the system exhibit a limit load when the induced stiffness parameter exceeds a threshold. This limit load increases when increasing the stiffness parameter and is related to a possible localized path in the post-buckling domain. Such a change in the maximum load may be very desirable from a structural standpoint.'\nauthor:\n- 'R. Lagrange'\nbibliography:\n- 'Biblio.bib'\ntitle: 'Compression-induced stiffness in the buckling of a one fiber composite'\n---\n\nIntroduction\n============\n\nImportant engineering applications such as railway tracks lying on a soil base, thin metal strips attached to a softer substrate or structures floating on fluids require accurate modeling of a layer bonded to a substrate-foundation. [@Shield94] and more recently [@Bigoni2008; @Sun2012] have shown that a beam theory model for the layer and a Winkler-type springs model for the foundation is accurate enough to correctly describe the layer-substrate system. The restoring force provided by the springs may depend linearly or nonlinearly on the local displacement. Many analytical or numerical analyses have considered the mechanical response of a straight elastica" +"---\nabstract: 'The delay-time distribution (DTD) is the occurrence rate of a class of objects as a function of time after a hypothetical burst of star formation. DTDs are mainly used as a statistical test of stellar evolution scenarios for supernova progenitors, but they can be applied to many other classes of astronomical objects. We calculate the first DTD for RR Lyrae variables using 29,810 RR Lyrae from the OGLE-IV survey and a map of the stellar-age distribution (SAD) in the Large Magellanic Cloud (LMC). We find that $\\sim 46\\%$ of the OGLE-IV RR Lyrae are associated with delay times older than 8 Gyr (main-sequence progenitor masses less than 1 M$_{\\odot}$), and are consistent with existing constraints on their ages, but surprisingly about $51\\%$ of RR Lyrae appear to have delay times $1.2-8$ Gyr (main-sequence masses between $1 - 2$ M$_{\\odot}$ at LMC metallicity). This intermediate-age signal also persists outside the Bar-region where crowding is less of a concern, and we verified that without this signal, the spatial distribution of the OGLE-IV RR Lyrae is inconsistent with the SAD map of the LMC. Since an intermediate-age RR Lyrae channel is in tension with the lack of RR Lyrae in intermediate-age clusters (noting issues" +"---\nabstract: |\n 1.2pc In this paper, we extend the reinterpreted discrete fracture model for flow simulation of fractured porous media containing flow blocking barriers on non-conforming meshes. 
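The delay-time distribution defined in the RR Lyrae record above enters rate predictions through a convolution with the star-formation history. The following schematic relation is a standard way to write this; the notation is assumed here for illustration and is not taken from the paper:

```latex
% Schematic use of a delay-time distribution Psi(tau): the present-day rate R(t)
% of a class of objects follows from the star-formation rate SFR by convolving
% over all possible delays tau (notation assumed, not the paper's own).
\begin{equation*}
  R(t) \;=\; \int_{0}^{t} \mathrm{SFR}(t - \tau)\, \Psi(\tau)\, \mathrm{d}\tau .
\end{equation*}
```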
The methodology of the approach is to modify the traditional Darcy\u2019s law into the hybrid-dimensional Darcy\u2019s law, where fractures and barriers are represented as Dirac-$\\delta$ functions contained in the permeability tensor and resistance tensor, respectively. As a natural extension of the reinterpreted discrete fracture model [@paper] for highly conductive fractures, this model is able to account for the influence of both highly conductive fractures and blocking barriers accurately on non-conforming meshes. The local discontinuous Galerkin (LDG) method is employed to accommodate the form of the hybrid-dimensional Darcy\u2019s law and the nature of the pressure/flux discontinuity. The performance of the model is demonstrated by several numerical tests.\n\n **Key Words:** hybrid-dimensional Darcy\u2019s law, discrete fracture model, fracture and barrier networks, non-conforming meshes, local discontinuous Galerkin methods\nauthor:\n- 'Ziyao Xu[^1], Zhaoqin Huang[^2], Yang Yang[^3]'\ntitle: '[The Hybrid-dimensional Darcy\u2019s Law: A Reinterpreted Discrete Fracture Model for Fracture and Barrier Networks on Non-conforming Meshes]{} [^4]'\n---\n\n1.2pc\n\nIntroduction\n============\n\nFractures are ubiquitous in crustal rocks as a result of geological processes such as jointing and faulting," +"---\nabstract: 'Future deep learning systems call for techniques that can deal with the evolving nature of temporal data and the scarcity of annotations when new problems occur. As a step towards this goal, we present FUSION (Few-shot UnSupervIsed cONtinual learning), a learning strategy that enables a neural network to learn quickly and continually on streams of unlabelled data and unbalanced tasks. The objective is to maximise the knowledge extracted from the unlabelled data stream (unsupervised), favor the forward transfer of previously learnt tasks and features (continual) and exploit as much as possible the supervised information when available (few-shot). The core of FUSION is MEML \u2013 Meta-Example Meta-Learning \u2013 which consolidates a meta-representation through the use of a self-attention mechanism during a single inner loop in the meta-optimisation stage. To further enhance the capability of MEML to generalise from few data, we extend it by creating various augmented surrogate tasks and by optimising over the hardest. An extensive experimental evaluation on public computer vision benchmarks shows that FUSION outperforms existing state-of-the-art solutions both in the few-shot and continual learning experimental settings. \u00a0'\nauthor:\n- Alessia Bertugli\n- Stefano Vincenzi\n- Simone Calderara\n- Andrea Passerini\nbibliography:\n- 'egbib.bib'\ntitle: 'Generalising via" +"---\nabstract: 'Three-dimensional cardiovascular fluid dynamics simulations typically require computation of several cardiac cycles before they reach a periodic solution, rendering them computationally expensive. Furthermore, there is currently no standardized method to determine whether a simulation has yet reached that periodic state. In this work, we propose the use of the asymptotic error measure to quantify the difference between simulation results and their ideal periodic state using lumped-parameter modeling. We further show that initial conditions are crucial in reducing computational time and develop an automated framework to generate appropriate initial conditions from a one-dimensional model of blood flow. 
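A schematic form of the hybrid-dimensional Darcy's law described in the fracture-model record above, in which a conductive fracture contributes a Dirac-$\delta$ term to the permeability tensor. All symbols here (matrix permeability $\mathbf{K}_m$, fracture permeability $k_f$, aperture $\varepsilon_f$, unit tangent $\mathbf{t}_f$ of the fracture curve $\Gamma_f$) are illustrative assumptions rather than the paper's notation:

```latex
% Hybrid-dimensional Darcy's law, sketched: a conductive fracture enters the
% permeability tensor as a Dirac-delta line source supported on Gamma_f; a
% blocking barrier would analogously add a delta term to the resistance
% tensor K^{-1} (assumed notation).
\begin{equation*}
  \mathbf{u} = -\mathbf{K}\,\nabla p, \qquad
  \mathbf{K} = \mathbf{K}_m
  + k_f\,\varepsilon_f\,\delta_{\Gamma_f}\,\mathbf{t}_f \otimes \mathbf{t}_f .
\end{equation*}
```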
We demonstrate the performance of our initialization method using six patient-specific models from the Vascular Model Repository. In our examples, our initialization protocol achieves periodic convergence within one or two cardiac cycles, leading to a significant reduction in computational cost compared to standard methods. All computational tools used in this work are implemented in the open-source software platform SimVascular. Automatically generated initial conditions have the potential to significantly reduce computation time in cardiovascular fluid dynamics simulations.'\nauthor:\n- 'Martin\u00a0R.\u00a0Pfaller, Jonathan\u00a0Pham, Nathan\u00a0M.\u00a0Wilson, David\u00a0W.\u00a0Parker, Alison\u00a0L.\u00a0Marsden'\nbibliography:\n- 'references.bib'\ntitle: On the periodicity of cardiovascular fluid dynamics" +"---\nabstract: 'In this paper, the problem of enhancing the quality of virtual reality (VR) services is studied for an indoor terahertz (THz)/visible light communication (VLC) wireless network. In the studied model, small base stations (SBSs) transmit high-quality VR images to VR users over THz bands and light-emitting diodes (LEDs) provide accurate indoor positioning services for them using VLC. Here, VR users move in real time and their movement patterns change over time according to their applications, where both THz and VLC links can be blocked by the bodies of VR users. To control the energy consumption of the studied THz/VLC wireless VR network, VLC access points (VAPs) must be selectively turned on so as to ensure accurate and extensive positioning for VR users. Based on the user positions, each SBS must generate corresponding VR images and establish THz links without body blockage to transmit the VR content. The problem is formulated as an optimization problem whose goal is to maximize the average number of successfully served VR users by selecting the appropriate VAPs to be turned on and controlling the user association with SBSs. To solve this problem, a policy gradient-based reinforcement learning (RL) algorithm that adopts a meta-learning" +"---\nabstract: 'Among different aspects of social networks, dynamics have been proposed to simulate how opinions can be transmitted. In this study, we propose a model that simulates the communication in an online social network, in which the posts are created from external information. We considered the nodes and edges of a network as users and their friendships, respectively. A real number is associated with each user representing its opinion. The dynamics starts with a user that has contact with a random opinion, and, according to a given probability function, this individual can post this opinion. This step is henceforth called *post transmission*. In the next step, called *post distribution*, another probability function is employed to select the user\u2019s friends that could see the post. Post transmission and distribution represent the user and the social network algorithm, respectively. If an individual has contact with a post, its opinion can be attracted or repulsed. Furthermore, individuals that are repulsed can change their friendship through rewiring. These steps are executed repeatedly until the dynamics converge. Several notable results were obtained, which include the formation of scenarios of polarization and consensus of opinions. 
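The update loop of the opinion model described at the end of the record above can be condensed into a few lines. The sketch below is a minimal rendering under stated assumptions: the exponential posting probability, the 50% distribution probability, the attraction/repulsion threshold and the rewiring rule are all illustrative choices, not the authors' exact specification:

```python
import numpy as np

rng = np.random.default_rng(42)
n, mu = 100, 0.2                        # users and attraction/repulsion strength
opinions = rng.uniform(-1, 1, n)         # one real-valued opinion per user
friends = {i: set(rng.choice(n, 5, replace=False)) - {i} for i in range(n)}

for step in range(10_000):
    i = int(rng.integers(n))
    post = rng.uniform(-1, 1)                              # external information
    if rng.random() < np.exp(-abs(post - opinions[i])):    # post transmission
        viewers = [j for j in friends[i] if rng.random() < 0.5]  # distribution
        for j in viewers:
            if abs(post - opinions[j]) < 1.0:   # close opinions are attracted
                opinions[j] += mu * (post - opinions[j])
            else:                               # distant opinions are repulsed
                opinions[j] -= mu * (post - opinions[j])
                friends[j].discard(i)           # repulsed users may rewire
                new = int(rng.integers(n))
                if new != j:
                    friends[j].add(new)
            opinions[j] = float(np.clip(opinions[j], -1, 1))
```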
In the case of echo chambers, the possibility" +"---\nauthor:\n- Cong Fang\n- Hangfeng He\n- Qi Long\n- 'Weijie J.\u00a0Su'\nbibliography:\n- 'reference.bib'\ntitle: 'Exploring Deep Neural Networks via Layer-Peeled Model: Minority Collapse in Imbalanced Training'\n---\n\nIntroduction {#sec:intro}\n============\n\nIn the past decade, deep learning has achieved remarkable performance across a range of scientific and engineering domains [@krizhevsky2017imagenet; @lecun2015deep; @silver2016mastering]. Interestingly, these impressive accomplishments were mostly achieved by heuristics and tricks, though often plausible, without much principled guidance from a theoretical perspective. On the flip side, however, this reality suggests the great potential a theory could have for advancing the development of deep learning methodologies in the coming decade.\n\nUnfortunately, it is not easy to develop a theoretical foundation for deep learning. Perhaps the most difficult hurdle lies in the nonconvexity of the optimization problem for training neural networks, which, loosely speaking, stems from the interaction between different layers of neural networks. To be more precise, consider a neural network for $K$-class classification (in logits), which in its simplest form reads[^1] $${\\bm{f}}({\\bm{x}}; {\\bm{W}_{\\textnormal{full}}}) = \\bm{b}_L + {\\bm{W}}_L \\sigma \\left( \\bm{b}_{L-1} + {\\bm{W}}_{L-1} \\sigma(\\cdots \\sigma(\\bm{b}_1 + {\\bm{W}}_1 {\\bm{x}}) \\cdots ) \\right).\n$$ Here, ${\\bm{W}_{\\textnormal{full}}} := \\{{\\bm{W}}_1, {\\bm{W}}_2, \\ldots, {\\bm{W}}_L\\}$ denotes the weights of the $L$ layers," +"---\nabstract: 'We study the surface elastic response of pure Ni, the random alloy FeNiCr and an average FeNiCr alloy in terms of the surface lattice Green\u2019s function. We propose a scheme for computing per-site Green\u2019s functions and study their per-site variations. The average FeNiCr alloy accurately reproduces the mean Green\u2019s function of the full random alloy. Variation around this mean is largest near the edge of the surface Brillouin-zone and decays as $q^{-2}$ with wavevector $q$ towards the $\\Gamma$-point. We also present expressions for the continuum surface Green\u2019s function of anisotropic solids of finite and infinite thickness and show that the atomistic Green\u2019s function approaches the continuum one near the $\\Gamma$-point. Our results are a first step towards efficient contact calculations and Peierls-Nabarro type models for dislocations in high-entropy alloys.'\nauthor:\n- 'Wolfram G. N\u00f6hring'\n- Jan Grie\u00dfer\n- Patrick Dondl\n- Lars Pastewka\nbibliography:\n- 'gf.bib'\ntitle: 'Surface lattice Green\u2019s functions for high-entropy alloys'\n---\n\nIntroduction\n============\n\nAtomistic simulations are routinely used to study atomic-scale details of elastic or plastic deformation of materials\u00a0[@tadmor_modeling_2011]. Frequently the number of atoms which are needed to resolve the most important details is small in comparison to the number of atoms that must be" +"---\nabstract: 'The superconducting circuit quantum electrodynamics (QED) architecture, composed of a superconducting qubit and a resonator, is a powerful platform for exploring quantum physics and quantum information processing. By employing techniques developed for superconducting quantum computing, we experimentally investigate phase-sensitive Landau-Zener-St\u00fcckelberg (LZS) interference phenomena in a circuit QED. 
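The displayed network function in the Layer-Peeled record above transcribes directly into code. Below is a minimal numpy rendering, with ReLU standing in for $\sigma$ and arbitrary layer widths; everything about the snippet (dimensions, random weights) is illustrative:

```python
import numpy as np

def forward(x, weights, biases):
    """f(x; W_full) = b_L + W_L sigma(... sigma(b_1 + W_1 x) ...), sigma = ReLU."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(b + W @ h, 0.0)       # hidden layers: affine map then ReLU
    return biases[-1] + weights[-1] @ h      # last layer outputs raw logits

rng = np.random.default_rng(0)
dims = [8, 16, 16, 3]                        # input dim, two hidden widths, K = 3
weights = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
biases = [rng.standard_normal(d) for d in dims[1:]]
print(forward(rng.standard_normal(dims[0]), weights, biases))   # three logits
```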
Our experiments cover a large range of LZS transition parameters, and demonstrate the LZS induced Rabi-like oscillation as well as phase-dependent steady-state population.'\nauthor:\n- 'Zhi-Xuan Yang'\n- 'Yi-Meng Zhang'\n- 'Yu-Xuan Zhou'\n- 'Li-Bo Zhang'\n- Fei Yan\n- Song Liu\n- Yuan Xu\n- Jian Li\ntitle: 'Phase sensitive Landau-Zener-St\u00fcckelberg interference in superconducting quantum circuit'\n---\n\nIntroduction\n============\n\nLandau-Zener-St\u00fcckelberg (LZS) interference is a phenomenon that appears in a two-level quantum system undergoing periodic transitions between energy levels at the anticrossing.[@nori_review] Since the pioneering theoretical work by Landau,[@landau1; @landau2] Zener,[@zener1] St\u00fcckelberg,[@stuckelberg1] and Majorana [@majorana1] in 1932, profound explorations of LZS interference and related problems have been carried out both theoretically and experimentally over the past nearly 90 years. Especially since the beginning of the 21st century, research on LZS interference and related topics has become active again, thanks to the enormous progress in experimental technologies for creating and manipulating coherent quantum states in solid-state systems. LZS interference patterns have" +"---\nabstract: |\n We consider the derivative nonlinear Schr\u00f6dinger equation in one space dimension, posed both on the line and on the circle. This model is known to be completely integrable and $L^2$-critical with respect to scaling.\n\n The first question we discuss is whether ensembles of orbits with $L^2$-equicontinuous initial data remain equicontinuous under evolution. We prove that this is true under the restriction $M(q)=\\int |q|^2 < 4\\pi$. We conjecture that this restriction is unnecessary.\n\n Further, we prove that the problem is globally well-posed for initial data in $H^{1/6}$ under the same restriction on $M$. Moreover, we show that this restriction would be removed by a successful resolution of our equicontinuity conjecture.\naddress:\n- 'Department of Mathematics, University of California, Los Angeles, CA 90095, USA'\n- 'Department of Mathematics, Rice University, Houston, TX 77005-1892, USA'\n- 'Department of Mathematics, University of California, Los Angeles, CA 90095, USA'\nauthor:\n- Rowan Killip\n- Maria Ntekoume\n- Monica Vi\u015fan\nbibliography:\n- 'bibliography.bib'\ntitle: |\n On the well-posedness problem for the\\\n derivative nonlinear Schr\u00f6dinger equation\n---\n\nIntroduction {#sec;introduction}\n============\n\nThe derivative nonlinear Schr\u00f6dinger equation $$\\label{DNLS} \\tag{DNLS}\n i q_t + q'' +i \\left(|q|^2 q\\right)'=0$$ describes the evolution of a complex-valued field $q$ defined either" +"---\nabstract: 'Owing to its rising importance in science and technology in recent years, particle tracking in videos has established itself as a tool for acquiring new knowledge in the life sciences and physics. Accordingly, different particle tracking methods for various scenarios have been developed. In this article, we present a particle tracking application implemented in Python for spherical magnetic particles in particular, including superparamagnetic beads and Janus particles. In the following, we distinguish between two sub-steps in particle tracking, namely the localization of particles in single images and the linking of the extracted particle positions of subsequent frames into trajectories. 
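For the linking sub-step named at the end of the record above, one standard formulation is a linear assignment problem over inter-frame distances. A minimal sketch follows; the gating threshold and all names are chosen for illustration and do not reproduce the application's actual code:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_pts, next_pts, max_dist=15.0):
    """Match detections of two consecutive frames by minimising total distance."""
    cost = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)       # globally optimal matching
    # Reject links whose displacement is implausibly large for one frame step.
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

prev_pts = np.array([[10.0, 12.0], [40.0, 41.0], [80.0, 9.0]])
next_pts = np.array([[11.5, 13.0], [42.0, 40.0], [120.0, 90.0]])
print(link_frames(prev_pts, next_pts))  # [(0, 0), (1, 1)]; third detection unmatched
```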
We provide an intensity-based localization technique to detect particles and two linking algorithms, which apply either frame-by-frame linking or linear assignment problem solving. Beyond that, we offer helpful tools to preprocess images automatically as well as estimate parameters required for the localization algorithm by utilizing machine learning. As an extra, we have implemented a technique to estimate the current spatial orientation of Janus particles within the x-y-plane. Our framework is readily extendable and easy-to-use as we offer a graphical user interface and a command-line tool. Various output options, such as data frames and videos, ensure further" +"---\nabstract: 'A fully discrete finite element method, based on a new weak formulation and a new time-stepping scheme, is proposed for the surface diffusion flow of closed curves in the two-dimensional plane. It is proved that the proposed method can preserve two geometric structures simultaneously at the discrete level, i.e., the perimeter of the curve decreases in time while the area enclosed by the curve is conserved. Numerical examples are provided to demonstrate the convergence of the proposed method and the effectiveness of the method in preserving the two geometric structures.'\naddress:\n- 'School of Mathematics and Statistics $\\&$ Hubei Key Laboratory of Computational Science, Wuhan University, Wuhan 430072, P. R. China.'\n- 'Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong.'\nauthor:\n- Wei Jiang\n- Buyang Li\nbibliography:\n- 'mybib.bib'\ntitle: 'A perimeter-decreasing and area-conserving algorithm for surface diffusion flow of curves'\n---\n\nsurface diffusion flow ,area conservation ,perimeter decrease ,parametric ,weak formulation ,time stepping ,finite element method 35Q55 ,65M70 ,65N25 ,65N35 ,81Q05\n\nIntroduction {#sec:intro}\n============\n\nThis article concerns the numerical approximation to the surface diffusion flow of closed curves in the two-dimensional plane, i.e., the evolution of a curve $$\\Gamma[\\mathbf X(\\cdot,t)] = \\{\\mathbf X(\\xi,t):" +"---\nabstract: 'In this set of notes, a complete, pedagogical tutorial for applying mean field theory to the two-dimensional Ising model is presented. Beginning with the motivation and basis for mean field theory, we formally derive the Bogoliubov inequality and discuss mean field theory itself. We proceed with the use of mean field theory to determine a magnetisation function, and the results of the derivation are interpreted graphically, physically, and mathematically. We give a new interpretation of the self-consistency condition in terms of intersecting surfaces and constrained solution sets. We also include some more general comments on the thermodynamics of the phase transition. We end by evaluating symmetry considerations in magnetisation, and some more subtle features of the Ising model. Together, a self-contained overview of the mean field Ising model is given, with some novel presentation of important results.'\nauthor:\n- Dalton A R Sakthivadivel\nbibliography:\n- 'main.bib'\ntitle: MAGNETISATION AND MEAN FIELD THEORY IN THE ISING MODEL\n---\n\nIntroduction\n============\n\nThe Ising model is a model of the lattice of particles constituting the atomic structure of a magnetic metal. 
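For reference while reading the Ising record above: the magnetisation function that such a mean-field treatment produces is usually stated as the self-consistency condition below. The conventions (nearest-neighbour coupling $J$, coordination number $z$, external field $h$, inverse temperature $\beta$) are assumed here and may differ from the notes' own:

```latex
% Standard mean-field self-consistency condition for the Ising magnetisation m
% (assumed conventions: coupling J, coordination number z, field h, inverse
% temperature beta); a spontaneous solution m != 0 exists for beta J z > 1.
\begin{equation*}
  m \;=\; \tanh\bigl(\beta\,(J z\, m + h)\bigr).
\end{equation*}
```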
The model takes a metallic element as being composed of a $d$-dimensional regular lattice $\\Lambda$ of atoms, and these atoms" +"---\nabstract: 'A permutation $\\pi$ contains a pattern $\\sigma$ if and only if there is a subsequence in $\\pi$ whose letters are in the same relative order as those in $\\sigma$. Partially ordered patterns (POPs) provide a convenient way to denote patterns in which the relative order of some of the letters does not matter. This paper elucidates connections between the avoidance sets of a few POPs and other combinatorial objects, directly answering five open questions posed by Gao and Kitaev [@gao-kitaev-2019]. This was done by thoroughly analysing the avoidance sets and developing recursive algorithms to derive these sets and their corresponding combinatorial objects in parallel, which yielded a natural bijection. We also analysed an avoidance set whose simple permutations are enumerated by the Fibonacci numbers and derived an algorithm to obtain them recursively.'\naddress:\n- 'Department of Mathematics & Statistics, Queens University, 48 University Ave. Jeffery Hall Kingston, ON Canada K7L 3N6'\n- 'Department of Mathematics & Computer Science, Royal Military College of Canada, P.O.Box 17000, Station Forces, Kingston, Ontario, Canada K7K 7B4'\n- 'Department of Mathematics & Computer Science, Royal Military College of Canada, P.O.Box 17000, Station Forces, Kingston, Ontario, Canada K7K 7B4'\nauthor:\n- Kai Ting" +"---\nabstract: 'We have used archival infrared images obtained with the Wide Field Camera 3 on board the Hubble Space Telescope to constrain the initial mass function of low-mass stars and brown dwarfs in the W3 star-forming region. The images cover 438\u00a0arcmin$^2$, which encompasses the entire complex, and were taken in the filters F110W, F139M, and F160W. We have estimated extinctions for individual sources in these data from their colors and have dereddened their photometry accordingly. By comparing an area of the images that contains the richest concentration of previously identified W3 members to an area that has few members and is dominated by background stars, we have estimated the luminosity function for members of W3 with masses of 0.03\u20130.4\u00a0$M_\\odot$. That luminosity function closely resembles data in typical nearby star-forming regions that have much smaller stellar populations than W3 ($\\lesssim$500 vs. several thousand objects). Thus, we do not find evidence of significant variations in the initial mass function of low-mass stars and brown dwarfs with star-forming conditions, which is consistent with recent studies of other distant massive star-forming regions.'\nauthor:\n- 'M. J. Huston'\n- 'K. L. Luhman'\nbibliography:\n- 'ms.bib'\ntitle: 'The Initial Mass Function of Low-mass" +"---\nabstract: |\n In the past decades, great progress has been made in the field of optical and particle based measurement techniques for experimental analysis of fluid flows. The Particle Image Velocimetry (PIV) technique is widely used to identify flow parameters from time-consecutive snapshots of particles injected into the fluid. The computation is performed as post-processing of the experimental data via a proximity measure between particles in successive frames of reference.\n\n However, the post-processing step becomes problematic as the motility and density of the particles increase, since the data arrive at extreme rates and volumes. 
Moreover, existing algorithms for PIV either provide sparse estimations of the flow or require a large computational time frame, preventing on-line use.\n\n The goal of this manuscript is therefore to develop an accurate on-line algorithm for the estimation of a fine-grained velocity field from PIV data. As the data constitutes a pair of images, we employ computer vision methods to solve the problem.\n\n In this work we introduce a convolutional neural network adapted to the problem, namely the Volumetric Correspondence Network (VCN), which was recently proposed for end-to-end optical flow estimation in computer vision. The network is thoroughly trained and tested on a dataset containing both synthetic and real flow" +"---\nabstract: 'As simulating complex biological processes becomes more important for modern medicine, new ways to compute these increasingly challenging data are necessary. In this paper, one of the most extensive volunteer-based distributed computing systems, called folding@home, is analyzed, and a trust-based approach is developed based upon it. Afterward, all advantages and disadvantages are presented. This approach uses trusted communities (TCs) that are a subset of all available clients that trust each other. Using such TCs, the system becomes more organic and responds better to malicious or malfunctioning clients.'\nauthor:\n- \nbibliography:\n- 'references.bib'\ntitle: 'A Trust-Based Approach for Volunteer-Based Distributed Computing in the Context of Biological Simulation'\n---\n\nGrid Computing, distributed, trust, reputation, folding@home, Computational Trust, Trusted-Desktop-Grid, Trust Communities\n\nIntroduction\n============\n\nAt the time of writing, the world is dominated by a worldwide pandemic called COVID-19. Developing a vaccine against it is one of the most important ways to fight this virus. Nevertheless, time is scarce, as the pandemic has already caused more than 2.15 million deaths[@Worldmeter2021] in the last 12 months. To speed up this process, vast processing power is needed to simulate the folding of the virus proteins. This simulated folding process helps scientists in finding new possibilities" +"---\nabstract: 'Guessing games are a prototypical instance of the \u201clearning by interacting\u201d paradigm. This work investigates how well an artificial agent can benefit from playing guessing games when later asked to perform on novel NLP downstream tasks such as Visual Question Answering (VQA). We propose two ways to exploit playing guessing games: 1) a supervised learning scenario in which the agent learns to mimic successful guessing games and 2) a novel way for an agent to play by itself, called Self-play via Iterated Experience Learning (SPIEL). 
We evaluate the ability of both procedures to generalise: an in-domain evaluation shows an increased accuracy ($+7.79$) compared with competitors on the evaluation suite CompGuessWhat?!; a transfer evaluation shows improved performance for VQA on the TDIUC dataset in terms of harmonic average accuracy ($+5.31$) thanks to more fine-grained object representations learned via SPIEL.'\nauthor:\n- |\n Alessandro Suglia$^1$, Yonatan Bisk$^2$, Ioannis Konstas$^1$, Antonio Vergari$^3$,\\\n **Emanuele Bastianelli**$^1$, **Andrea Vanzo**$^1$,\n- |\n **Oliver Lemon$^1$**\\\n $^1$Heriot-Watt University, Edinburgh, UK\\\n $^2$Carnegie Mellon University, Pittsburgh, USA\\\n $^3$University of California, Los Angeles, USA\\\n $^1$`{as247,i.konstas,e.bastianelli,a.vanzo,o.lemon}@hw.ac.uk`\\\n $^2$`ybisk@cs.cmu.edu`, $^3$`aver@cs.ucla.edu`\\\nbibliography:\n- 'eacl2021.bib'\ntitle: An Empirical Study on the Generalization Power of Neural Representations Learned via Visual Guessing Games\n---\n\nBackground & Related" +"---\nabstract: 'We investigate instabilities of the magnetic ground state in ferromagnetic metals that are induced by uniform electrical currents, and, in particular, go beyond previous analyses by including dipolar interactions. These instabilities arise from spin-transfer torques that lead to Doppler shifted spin waves. For sufficiently large electrical currents, spin-wave excitations have negative energy with respect to the uniform magnetic ground state, while remaining dynamically stable due to dissipative spin-transfer torques. Hence, the uniform magnetic ground state is energetically unstable, but is not able to dynamically reach the new ground state. We estimate this to happen for current densities $ j\\gtrsim (1-D/D_c)10^{13} \\mathrm{A/m^2} $ in typical thin film experiments, with $ D $ the Dzyaloshinskii-Moriya interaction constant, and $ D_c $ the Dzyaloshinskii-Moriya interaction that is required for spontaneous formation of spirals or skyrmions. These current densities can be made arbitrarily small for ultrathin film thicknesses on the order of nanometers, due to surface and interlayer effects. From an analogue gravity perspective, the stable negative energy states are an essential ingredient to implement event horizons for magnons \u2013 the quanta of spin waves \u2013 giving rise to e.g. Hawking radiation, and can be used to significantly amplify spin waves in" +"---\nabstract: 'The [*Kepler*]{} mission has provided a wealth of data, revealing new insights in time-domain astronomy. However, [*Kepler*]{}\u2019s single band-pass has limited studies to a single wavelength. In this work we build a data-driven, pixel-level model for the Pixel Response Function (PRF) of [*Kepler*]{} targets, modeling the image data from the spacecraft. Our model is sufficiently flexible to capture known detector effects, such as non-linearity, intra-pixel sensitivity variations, and focus change. In theory, the shape of the [*Kepler*]{} PRF should also be weakly wavelength dependent, due to optical chromatic aberration and wavelength dependent detector response functions. We are able to identify these predicted shape changes to the PRF using the residuals between [*Kepler*]{} data and our model. In this work, we show that these PRF changes correspond to wavelength variability in [*Kepler*]{} targets using a small sample of eclipsing binaries. Using our model, we demonstrate that pixel-level light curves of eclipsing binaries show variable eclipse depths, ellipsoidal modulation and limb darkening. 
These changes at the pixel level are consistent with multi-wavelength photometry. Our work suggests each pixel in the Kepler data of a single target has a different effective wavelength, ranging over $\\approx$ 550\u2013750 nm. In this proof of concept, we demonstrate our model," +"---\nabstract: 'We perform a comparative spectro-temporal analysis of the variability classes of GRS 1915+105 and IGR J17091-3624 to draw inferences regarding the underlying accretion flow mechanism. The $\\nu$ and C2 class *Rossi X-Ray Timing Explorer* observations have been considered for analysis. We investigate the intensity variation of the source in different energy domains that correspond to different components of the accretion flow and infer the relative dominance of these flow components during the dip/flare events. We correlate the dependence of the dynamic photon index ($\\Theta$) with intensities in different energy bands and comment on the transition of the source to hard/soft phases during soft dips/flares. We also report the presence of sharp QPOs at $\\sim 7.1$ Hz corresponding to both the softer and harder domains in the case of the $\\nu$ variability class of GRS 1915+105 and discuss the possible accretion flow configuration it suggests. A sharp QPO around $\\sim 20$ mHz is observed in the $\\nu$ and C2 classes of IGR J17091-3624 in the low and mid energy bands (2.0-6.0 keV and 6.0-15.0 keV), but remains undetected at high energies (15.0-60.0 keV). The 2.5-25.0 keV background-subtracted spectra have also been fitted with TCAF along with a Compton reflection component. A plausible" +"---\nabstract: 'Police departments around the world have been experimenting with forms of place-based data-driven proactive policing for over two decades. Modern incarnations of such systems are commonly known as hot spot predictive policing. These systems predict where future crime is likely to concentrate such that police can allocate patrols to these areas and deter crime before it occurs. Previous research on fairness in predictive policing has concentrated on the feedback loops which occur when models are trained on discovered crime data, but has limited implications for models trained on victim crime reporting data. We demonstrate how differential victim crime reporting rates across geographical areas can lead to outcome disparities in common crime hot spot prediction models. Our analysis is based on a simulation[^1] patterned after district-level victimization and crime reporting survey data for Bogot\u00e1, Colombia. Our results suggest that differential crime reporting rates can lead to a displacement of predicted hotspots from high crime but low reporting areas to high or medium crime and high reporting areas. This may lead to misallocations both in the form of over-policing and under-policing.'\nauthor:\n- 'Nil-Jana Akpinar'\n- 'Maria De-Arteaga'\n- Alexandra Chouldechova\nbibliography:\n- 'literature.bib'\ntitle: The effect of differential victim" +"---\nabstract: 'We give a new proof of the fact that finite bipartite graphs cannot be axiomatized by finitely many first-order sentences among *finite* graphs. (This fact is a consequence of a general theorem proved by L.\u00a0Ham and M.\u00a0Jackson, and the counterpart of this fact for all bipartite graphs in the class of *all* graphs is a well-known consequence of the compactness theorem.) 
Also, to exemplify that our method is applicable in various fields of mathematics, we prove that neither finite simple groups, nor the ordered sets of join-irreducible congruences of slim semimodular lattices can be described by finitely many axioms in the class of *finite* structures. Since a 2007 result of G.\u00a0Gr\u00e4tzer and E.\u00a0Knapp, slim semimodular lattices have constituted the most intensively studied part of lattice theory and they have already led to results even in group theory and geometry. In addition to the non-axiomatizability results mentioned above, we present a new property, called Decomposable Cyclic Elements Property, of the congruence lattices of slim semimodular lattices.'\naddress: 'University of Szeged, Bolyai Institute. Szeged, Aradi v\u00e9rtan\u00fak tere 1, HUNGARY 6720'\nauthor:\n- G\u00e1bor Cz\u00e9dli\ndate: '[March 30, 2021 (for arXiv)]{}'\ntitle: 'Cyclic congruences of slim semimodular" +"---\nabstract: 'In this work we undertake a thorough study of the non-asymptotic properties of the vanilla generative adversarial networks (GANs). We prove a sharp oracle inequality for the Jensen-Shannon (JS) divergence between the underlying density $\\pstar$ and the GAN estimate. We also study the rates of convergence in the context of nonparametric density estimation. In particular, we show that the JS-divergence between the GAN estimate and $\\pstar$ decays as fast as $(\\log{n}/n)^{2\\beta/(2\\beta+d)}$ where $n$ is the sample size and $\\beta$ determines the smoothness of $\\pstar$. To the best of our knowledge, this is the first result in the literature on density estimation using vanilla GANs with JS convergence rates faster than $n^{-1/2}$ in the regime $\\beta > d/2$. Moreover, we show that the obtained rate is minimax optimal (up to logarithmic factors) for the considered class of densities.'\nauthor:\n- |\n Denis Belomestny denis.belomestny@uni-due.de\\\n Duisburg-Essen University, Germany Eric Moulines eric.moulines@polytechnique.edu\\\n Ecole Polytechnique, France Alexey Naumov anaumov@hse.ru\\\n HSE University, Russian Federation Nikita Puchkin npuchkin@hse.ru\\\n HSE University and IITP RAS, Russian Federation Sergey Samsonov svsamsonov@hse.ru\\\n HSE University, Russian Federation\nbibliography:\n- 'bibliography.bib'\ntitle: |\n Rates of convergence for density estimation\\\n with generative adversarial networks\n---\n\ngenerative model, oracle inequality, Jensen-Shannon risk," +"---\nabstract: 'Persuasion is an important and yet complex aspect of human intelligence. When undertaken through dialogue, the deployment of good arguments, and therefore counterarguments, clearly has a significant effect on the ability to be successful in persuasion. Two key dimensions for determining whether an argument is \u201cgood\u201d in a particular dialogue are the degree to which the intended audience believes the argument and counterarguments, and the impact that the argument has on the concerns of the intended audience. In this paper, we present a framework for modelling persuadees in terms of their beliefs and concerns, and for harnessing these models in optimizing the choice of move in persuasion dialogues. Our approach is based on the Monte Carlo Tree Search which allows optimization in real-time. 
We provide empirical results of a study with human participants showing that our automated persuasion system based on this technology is superior to a baseline system that does not take the beliefs and concerns into account in its strategy.'\nauthor:\n- Emmanuel Hadoux\n- Anthony Hunter\n- Sylwia Polberg\nbibliography:\n- 'references.bib'\ntitle: |\n Strategic Argumentation Dialogues for Persuasion: Framework and Experiments Based\\\n on Modelling the Beliefs and Concerns of the Persuadee\n---\n\nIntroduction\n============" +"---\nabstract: |\n We present counterfactual planning as a design approach for creating a range of safety mechanisms that can be applied in hypothetical future AI systems which have Artificial General Intelligence.\n\n The key step in counterfactual planning is to use an AGI machine learning system to construct a counterfactual world model, designed to be different from the real world the system is in. A counterfactual planning agent determines the action that best maximizes expected utility in this counterfactual planning world, and then performs the same action in the real world.\n\n We use counterfactual planning to construct an AGI agent emergency stop button, and a safety interlock that will automatically stop the agent before it undergoes an intelligence explosion. We also construct an agent with an input terminal that can be used by humans to iteratively improve the agent\u2019s reward function, where the incentive for the agent to manipulate this improvement process is suppressed. As an example of counterfactual planning in a non-agent AGI system, we construct a counterfactual oracle.\n\n As a design approach, counterfactual planning is built around the use of a graphical notation for defining mathematical counterfactuals. This two-diagram notation also provides a compact and readable language for" +"---\naddress: \nauthor:\n- Yoichi Takeda\ntitle: Rubidium abundances of galactic disk stars\n---\n\nIntroduction\n============\n\nRubidium (Rb, $Z = 37$) is a neutron-capture element (in which both s- and r-processes are involved), whose abundances spectroscopically determined for various types of stars may provide us with useful information regarding mixing of nuclear-process products or galactic chemical evolution. However, published spectroscopic studies of stellar Rb abundances have been rather limited in number, which may presumably reflect the comparatively large technical difficulty of its abundance determination (i.e., a weak and blended line feature has to be dealt with in many cases).\n\nInvestigations of Rb abundances have so far focused mainly on metal-poor stars. Following Gratton & Sneden (1994), who determined the abundances of Rb (along with other neutron-rich elements) for 19 stars at $-2.8 <$\u00a0\\[Fe/H\\]\u00a0$< 0$, Tomkin & Lambert (1999) conducted a more detailed Rb abundance analysis for 44 metal-deficient giants and dwarfs in the metallicity range of $-2.0<$\u00a0\\[Fe/H\\]\u00a0$< 0.0$. These previous studies revealed that the \\[Rb/Fe\\] ratio tends to be moderately supersolar ($0 \\lesssim$\u00a0\\[Rb/Fe\\]\u00a0$\\lesssim 0.5$) in the metal-poor regime (\\[Fe/H\\]\u00a0$\\lesssim -1$). Subsequently, Rb abundances of globular cluster giants (Yong et al. 2006, 2008; D\u2019Orazi et" +"---\nabstract: 'When studying the expressive power of neural networks, a main challenge is to understand how the size and depth of the network affect its ability to approximate real functions. 
However, not all functions are interesting from a practical viewpoint: functions of interest usually have a polynomially-bounded Lipschitz constant, and can be computed efficiently. We call functions that satisfy these conditions \u201cbenign\u201d, and explore the benefits of size and depth for approximation of benign functions with ReLU networks. As we show, this problem is more challenging than the corresponding problem for non-benign functions. We give complexity-theoretic barriers to showing depth lower bounds: proving the existence of a benign function that cannot be approximated by polynomial-sized networks of depth $4$ would settle longstanding open problems in computational complexity. It implies that beyond depth $4$ there is a barrier to showing depth-separation for benign functions, even between networks of constant depth and networks of nonconstant depth. We also study size-separation, namely, whether there are benign functions that can be approximated with networks of size ${{\\cal O}}(s(d))$, but not with networks of size ${{\\cal O}}(s''(d))$. We show a complexity-theoretic barrier to proving such results beyond size ${{\\cal O}}(d\\log^2(d))$, but also show an explicit benign function," +"---\nabstract: 'Reconstructing viewed images from fMRI recordings is an absorbing research area in neuroscience and provides a potential brain-reading technology. The challenge lies in the fact that visual encoding in the brain is highly complex and not fully understood. Inspired by the theory that visual features are hierarchically represented in cortex, we propose to break the complex visual signals into multi-level components and decode each component separately. Specifically, we decode shape and semantic representations from the lower and higher visual cortex respectively, and merge the shape and semantic information into images by a generative adversarial network (Shape-Semantic GAN). This \u2018divide and conquer\u2019 strategy captures visual information more accurately. Experiments demonstrate that Shape-Semantic GAN improves the reconstruction similarity and image quality, and achieves state-of-the-art image reconstruction performance.'\nauthor:\n- |\n Tao Fang$^{1}$, Yu Qi$^{1,*}$, Gang Pan$^{2,1,3}$[^1]\\\n `duolafang@zju.edu.cn, qiyu@zju.edu.cn, gpan@zju.edu.cn`\\\n $^1$ College of Computer Science and Technology, Zhejiang University\\\n $^2$ State Key Lab of CAD&CG, Zhejiang University\\\n $^3$ The First Affiliated Hospital, College of Medicine, Zhejiang University\nbibliography:\n- 'cited.bib'\ntitle: 'Reconstructing Perceptive Images from Brain Activity by Shape-Semantic GAN'\n---\n\nIntroduction\n============\n\nDecoding visual information and reconstructing stimulus images from brain activities is a meaningful and attractive task in neural decoding. The" +"---\nabstract: |\n We develop techniques that lay out a basis for generalizations of Thurston\u2019s famous Topological Characterization of Rational Functions for an *infinite* set of marked points and branched coverings of infinite degree. 
Analogously to the classical theorem, we consider Thurston\u2019s $\\sigma$-map acting on a [Teichm\u00fcller]{}\u00a0space which is this time infinite-dimensional, and this leads to a completely different theory compared to the classical setting.\n\n We demonstrate our techniques by giving an alternative proof of the result by Markus F\u00f6rster about the classification of exponential functions with the escaping singular value.\nauthor:\n- Konstantin Bogdanov\ntitle: |\n Infinite-dimensional Thurston theory\\\n and transcendental dynamics I:\\\n infinite-legged spiders\n---\n\nIntroduction\n============\n\nThis is the first article out of four, prepared in order to publish the results of the author\u2019s doctoral thesis. In the second and third papers we use the toolbox of the infinite-dimensional Thurston theory developed here to classify the transcendental entire functions that are compositions of a polynomial and the exponential whose singular values escape on disjoint dynamic rays (the most general mode of escape). In the fourth article we investigate continuity of such families of functions with respect to potentials and external addresses and" +"---\nabstract: 'This paper presents a new method for deriving converse bounds on the non-asymptotic achievable rate of discrete weakly symmetric memoryless channels. It is based on the finite blocklength statistics of the channel, where the converse bound is produced with the use of an auxiliary channel. This method is general and is initially presented for an arbitrary weakly symmetric channel. Afterwards, the main result is specialized for the $q$-ary erasure channel (QEC), binary symmetric channel (BSC), and QEC with stop feedback. Numerical evaluations show identical or comparable bounds to the state-of-the-art in the cases of QEC and BSC, and a tighter bound for the QEC with stop feedback.'\nauthor:\n- '\\'\nbibliography:\n- 'bound\\_article.bib'\ntitle: 'Non-Asymptotic Converse Bounds Via Auxiliary Channels'\n---\n\nConverse bounds, achievability bounds, finite blocklength regime, non-asymptotic analysis, channel capacity, discrete memoryless channels.\n\nIntroduction\n============\n\nAn important step towards latency mitigation is the study of achievable channel coding rates for finite blocklengths. Shannon proved in [@shannon1948mathematical] that there exists a code that can achieve channel capacity as the blocklength grows to infinity. This is the main reason the majority of conventional communications systems utilize blocks of several thousands of symbols to transmit with a rate approximately" +"---\nabstract: 'In this paper, we define the term \u2018DigitalExposome\u2019 as a conceptual framework that takes us closer towards understanding the relationship between environment, personal characteristics, behaviour and wellbeing using multimodal mobile sensing technology. Specifically, we simultaneously collected (for the first time) multi-sensor data including urban environmental factors (e.g. air pollution including: [*PM1, PM2.5, PM10, Oxidised, Reduced, NH3 and Noise*]{}, People Count in the vicinity), body reaction (physiological reactions including: [*EDA, HR, HRV*]{}, Body Temperature, [*BVP*]{} and movement) and individuals\u2019 perceived responses (e.g. self-reported valence) in urban settings. Our users followed a pre-specified urban path and collected the data using comprehensive sensing edge devices. The data is instantly fused, time-stamped and geo-tagged at the point of collection. 
A range of multivariate statistical analysis techniques have been applied, including Principal Component Analysis, Regression and Spatial Visualisations, to unravel the relationship between the variables. Results showed that [*EDA*]{} and Heart Rate Variability [*HRV*]{} are noticeably impacted by the level of Particulate Matter ([*PM*]{}) in the environment and correlate well with the environmental variables. Furthermore, we adopted a Deep Belief Network to extract features from the multimodal data feed, which outperformed a Convolutional Neural Network and achieved up to ([*a = 80.8%, [$\\sigma$]{} = 0.001*]{}) accuracy.'\nauthor:\n- Thomas" +"---\nabstract: 'Federated Learning (FL) provides both model performance and data privacy for machine learning tasks where samples or features are distributed among different parties. In the training process of FL, no party has a global view of data distributions or model architectures of other parties. Thus the manually-designed architectures may not be optimal. In the past, Neural Architecture Search (NAS) has been applied to FL to address this critical issue. However, existing Federated NAS approaches require prohibitive communication and computation effort, as well as the availability of high-quality labels. In this work, we present Self-supervised Vertical Federated Neural Architecture Search (SS-VFNAS) for automating FL where participants hold feature-partitioned data, a common cross-silo scenario called Vertical Federated Learning (VFL). In the proposed framework, each party first conducts NAS using a self-supervised approach to find a local optimal architecture with its own data. Then, parties collaboratively improve the local optimal architecture in a VFL framework with supervision. We demonstrate experimentally that our approach has superior performance, communication efficiency and privacy compared to Federated NAS and is capable of generating high-performance and highly-transferable heterogeneous architectures even with insufficient overlapping samples, providing automation for those parties without deep learning expertise.[^1]'\nauthor:\n- 'Xinle\u00a0Liang," +"---\nabstract: 'Retinal degenerative diseases cause profound visual impairment in more than 10 million people worldwide, and retinal prostheses are being developed to restore vision to these individuals. Analogous to cochlear implants, these devices electrically stimulate surviving retinal cells to evoke visual percepts (phosphenes). However, the quality of current prosthetic vision is still rudimentary. Rather than aiming to restore \u201cnatural\u201d vision, there is potential merit in borrowing state-of-the-art computer vision algorithms as image processing techniques to maximize the usefulness of prosthetic vision. Here we combine deep learning\u2013based scene simplification strategies with a psychophysically validated computational model of the retina to generate realistic predictions of simulated prosthetic vision, and measure their ability to support scene understanding of sighted subjects (virtual patients) in a variety of outdoor scenarios. We show that object segmentation may better support scene understanding than models based on visual saliency and monocular depth estimation. In addition, we highlight the importance of basing theoretical predictions on biologically realistic models of phosphene shape. 
Overall, this work has the potential to drastically improve the utility of prosthetic vision for people blinded from retinal degenerative diseases.'\nauthor:\n- Nicole Han\n- Sudhanshu Srivastava\n- Aiwen Xu\n- Devi Klein\n- Michael Beyeler" +"---\nabstract: 'We review recent experimental results on the metal-insulator transition and low-density phases in strongly-interacting, low-disordered silicon-based two-dimensional electron systems. Special attention is given to the metallic state in ultra-clean SiGe quantum wells and to the evidence for a flat band at the Fermi level and a quantum electron solid.'\naddress:\n- 'Institute of Solid State Physics, Chernogolovka, Moscow District 142432, Russia'\n- 'Physics Department, Northeastern University, Boston, Massachusetts 02115, USA'\nauthor:\n- 'A.\u00a0A. Shashkin'\n- 'S.\u00a0V. Kravchenko'\ntitle: 'Metal-insulator transition and low-density phases in a strongly-interacting two-dimensional electron system'\n---\n\nTwo-dimensional electron systems ,strongly correlated electrons ,spin-polarized electron system ,flat bands ,Wigner crystallization 71.30.+h ,73.40.Qv\n\nIntroduction\n============\n\nThe metal-insulator transition (MIT) is an exceptional testing ground for studying strong electron-electron correlations in two dimensions (2D) in the presence of disorder. The existence of the metallic state and the MIT in strongly interacting 2D electron systems (contrary to the famous conclusion by the \u201cGang of Four\u201d that only an insulating state is possible in non-interacting 2D systems [@abrahams1979scaling]) was predicted in Refs.\u00a0[@finkelstein1983influence; @finkelstein1984weak; @castellani1984interaction]. The phenomenon was experimentally discovered in silicon metal-oxide-semiconductor field-effect transistors (MOSFETs) and subsequently observed in a wide variety of other strongly-interacting 2D" +"---\nabstract: 'The optimal dynamic treatment rule (ODTR) framework offers an approach for understanding which kinds of patients respond best to specific treatments \u2013 in other words, treatment effect heterogeneity. Recently, there has been a proliferation of methods for estimating the ODTR. One such method is an extension of the SuperLearner algorithm \u2013 an ensemble method to optimally combine candidate algorithms extensively used in prediction problems \u2013 to ODTRs. Following the \u201ccausal roadmap,\" we causally and statistically define the ODTR and provide an introduction to estimating it using the ODTR SuperLearner. Additionally, we highlight practical choices when implementing the algorithm, including choice of candidate algorithms, metalearners to combine the candidates, and risk functions to select the best combination of algorithms. Using simulations, we illustrate how estimating the ODTR using this SuperLearner approach can uncover treatment effect heterogeneity more effectively than traditional approaches based on fitting a parametric regression of the outcome on the treatment, covariates and treatment-covariate interactions. We investigate the implications of choices in implementing an ODTR SuperLearner at various sample sizes. Our results show the advantages of: (1) including a combination of both flexible machine learning algorithms and simple parametric estimators in the library of candidate algorithms; (2)" +"---\nabstract: |\n When users on social media share content without considering its veracity, they may unwittingly be spreading misinformation. 
In this work, we investigate the design of lightweight interventions that nudge users to assess the accuracy of information as they share it. Such assessment may deter users from posting misinformation in the first place, and their assessments may also provide useful guidance to friends aiming to assess those posts themselves.\n\n In support of lightweight assessment, we first develop a taxonomy of the reasons why people believe a news claim is or is not true; this taxonomy yields a checklist that can be used at posting time. We conduct evaluations to demonstrate that the checklist is an accurate and comprehensive encapsulation of people\u2019s free-response rationales.\n\n In a second experiment, we study the effects of three behavioral nudges\u20141) checkboxes indicating whether headlines are accurate, 2) tagging reasons (from our taxonomy) that a post is accurate via a checklist and 3) providing free-text rationales for why a headline is or is not accurate\u2014on people\u2019s intention of sharing the headline on social media. From an experiment with 1668 participants, we find that both providing accuracy assessment and rationale reduce the sharing of false" +"---\nabstract: 'A key challenge in scaling Gaussian Process (GP) regression to massive datasets is that exact inference requires computation with a dense $n \\times n$ kernel matrix, where $n$ is the number of data points. Significant work focuses on approximating the kernel matrix via interpolation using a smaller set of $m$ \u201cinducing points\u201d. *Structured kernel interpolation* (SKI) is among the most scalable methods: by placing inducing points on a dense grid and using structured matrix algebra, SKI achieves per-iteration time of $\\mathcal{O}(n + m \\log m)$ for approximate inference. This linear scaling in $n$ enables inference for very large data sets; however the cost is *per-iteration*, which remains a limitation for extremely large $n$. We show that the SKI per-iteration time can be reduced to $\\mathcal{O}(m \\log m)$ after a single $\\mathcal{O}(n)$ time precomputation step by reframing SKI as solving a natural Bayesian linear regression problem with a fixed set of $m$ compact basis functions. Our code is available at .'\nauthor:\n- |\n Mohit Yadav, Daniel Sheldon, Cameron Musco\\\n \\\nbibliography:\n- 'main.bib'\ntitle: Faster Kernel Interpolation for Gaussian Processes\n---\n\nIntroduction {#sec:introduction}\n============\n\nGPs are a widely used and principled class of methods for predictive modeling. They" +"---\nabstract: 'The Parisi scheme for equilibrium and the corresponding slow dynamics with multithermalization \u2013 same temperature common to all observables, different temperatures only possible at widely separated timescales \u2013 imply one another. Consistency requires that two systems brought into infinitesimal coupling be able to rearrange their timescales in order that all their temperatures match: this time reorganisation is only possible because the systems have a set of time-reparametrization invariances, which are thus seen to be an essential component of the scenario.'\nauthor:\n- Jorge Kurchan\nbibliography:\n- 'resubmit1.bib'\ntitle: 'Time-reparametrization invariances, multithermalization and the Parisi scheme'\n---\n\n\u00a0\\\n\u00a0\\\n\nIntroduction\n============\n\nA finite dimensional system whose equilibrium solution follows the Parisi scheme [@MezardParisiVirasoro] will take an infinite time to reach this equilibrium starting from a random configuration. 
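Returning to the SKI record above: its reframing of GP inference as Bayesian linear regression on $m$ fixed basis functions can be illustrated with dense toy code. Real SKI uses sparse interpolation weights and grid structure, so its one-time pass costs $\mathcal{O}(n)$ rather than the dense $\mathcal{O}(nm^2)$ below; every name and value in the sketch is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10_000, 50                        # many data points, few basis functions
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

centers = np.linspace(0, 1, m)           # fixed RBF basis as a dense stand-in
Phi = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 0.05) ** 2)   # n x m

# One-time precomputation that touches all n points; afterwards every solve
# involves only m x m quantities, independent of n.
A = Phi.T @ Phi
b = Phi.T @ y

noise_var, prior_var = 0.1 ** 2, 1.0
w = np.linalg.solve(A + (noise_var / prior_var) * np.eye(m), b)  # posterior mean

x_test = np.linspace(0, 1, 5)
Phi_test = np.exp(-0.5 * ((x_test[:, None] - centers[None, :]) / 0.05) ** 2)
print(Phi_test @ w)                      # predictive mean at the test points
```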
It may also be driven into an out of equilibrium steady-state by an infinitesimal drive, such as shear [@thalmann2001aging; @berthier2000two], or time-dependence of disorder [@horner1992dynamics]. If the relaxation times are long, or, in a steady-state, if the drive is weak, the dynamics are slow: this is the regime we are interested in. The idea of this paper is composed of two parts:\n\n$\\bullet$ The out of equilibrium dynamics under these circumstances is a" +"---\nabstract: |\n In this paper, we provide a review on the kernel method, which is one of the options for characterizing so-called exact tail asymptotic properties in stationary probabilities of two-dimensional random walks, discrete or continuous (or mixed), in the quarter plane. Many two-dimensional queueing systems can be modelled via these types of random walks. Stationary probabilities are one of the most sought statistical quantities in queueing analysis. However, explicit expressions are available only for a very limited number of models. Therefore, tail asymptotic properties become more important, since they provide insightful information into the structure of the tail probabilities, and often lead to approximations, performance bounds, algorithms, among possible others.\n\n Characterizing tail asymptotics for random walks in the quarter plane is a fundamental and also classical problem. Classical approaches are usually based on a complete determination of the transformation for the unknown probabilities of interest, for example, a singular integral presentation for the unknown probability generating function through boundary value problems [@FKM:82; @Guillemin-Leeuwaarden:09]. In contrast to classical approaches (approaches based on the solution for the unknown probabilities or the transform of the unknown probabilities), the kernel method, reviewed here, is very efficient for two-dimensional problems, which only requires" +"---\nabstract: 'This paper introduces the notion of quantitative resilience of a control system. Following prior work, we study systems enduring a loss of control authority over some of their actuators. Such a malfunction results in actuators producing possibly undesirable inputs over which the controller has real-time readings but no control. By definition, a system is resilient if it can still reach a target after a partial loss of control authority. However, after a malfunction, a resilient system might be significantly slower to reach a target compared to its initial capabilities. We quantify this loss of performance through the new concept of quantitative resilience. We define such a metric as the maximal ratio of the minimal times required to reach any target for the initial and malfunctioning systems. Naive computation of quantitative resilience directly from the definition is a complex task as it requires solving four nested, possibly nonlinear, optimization problems. The main technical contribution of this work is to provide an efficient method to compute quantitative resilience. Relying on control theory and on two novel geometric results, we reduce the computation of quantitative resilience to a single linear optimization problem. We illustrate our method on two numerical examples: an" +"---\nabstract: 'We introduce tf\\_geometric[^1], an efficient and friendly library for graph deep learning, which is compatible with both TensorFlow 1.x and 2.x. tf\\_geometric provides kernel libraries for building Graph Neural Networks (GNNs) as well as implementations of popular GNNs. 
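The quantitative-resilience definition above can be made concrete with a one-dimensional toy system, where the minimal reach times are available in closed form. The system, bounds, and numbers below are purely illustrative and much simpler than the general setting the paper addresses.

```python
# Driftless toy system: xdot = u1 + u2, with |u1| <= 2 (kept) and
# |u2| <= 1 (control authority lost). Minimal time to a target at distance d:
#  - nominal: both inputs push toward the target -> speed 3 -> T_N = d / 3
#  - malfunction: u2 turns adversarial, u1 overpowers it -> speed 1 -> T_M = d
for d in (0.5, 1.0, 4.0):
    T_N, T_M = d / 3.0, d / 1.0
    print(f"target at {d}: T_N={T_N:.2f}, T_M={T_M:.2f}, ratio={T_N / T_M:.3f}")
# The ratio is 1/3 for every target, so the quantitative resilience of this
# toy system is 1/3 (no linear program is needed in one dimension).
```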
The kernel libraries consist of infrastructures for building efficient GNNs, including graph data structures, graph map-reduce framework, graph mini-batch strategy, etc. These infrastructures enable tf\\_geometric to support single-graph computation, multi-graph computation, graph mini-batch, distributed training, etc.; therefore, tf\\_geometric can be used for a variety of graph deep learning tasks, such as transductive node classification, inductive node classification, link prediction, and graph classification. Based on the kernel libraries, tf\\_geometric implements a variety of popular GNN models for different tasks. To facilitate the implementation of GNNs, tf\\_geometric also provides some other libraries for dataset management, graph sampling, etc. Different from existing popular GNN libraries, tf\\_geometric provides not only Object-Oriented Programming (OOP) APIs, but also Functional APIs, which enable tf\\_geometric to handle advanced graph deep learning tasks such as graph meta-learning. The APIs of tf\\_geometric are friendly, and they are suitable for both beginners and experts. In this paper, we first present an overview of tf\\_geometric\u2019s framework. Then, we conduct experiments on some" +"---\nabstract: 'In the last three decades, powerful computer-assisted techniques have been developed in order to validate a posteriori numerical solutions of semilinear elliptic problems of the form $\\Delta u +f(u,\\nabla u) = 0$. By studying a well chosen fixed point problem defined around the numerical solution, these techniques make it possible to prove the existence of a solution in an explicit (and usually small) neighborhood the numerical solution. In this work, we develop a similar approach for a broader class of systems, including nonlinear diffusion terms of the form $\\Delta \\Phi(u)$. In particular, this enables us to obtain new results about steady states of a cross-diffusion system from population dynamics: the (non-triangular) SKT model. We also revisit the idea of automatic differentiation in the context of computer-assisted proof, and propose an alternative approach based on differential-algebraic equations.'\nauthor:\n- 'Maxime Breden[^1]'\nbibliography:\n- 'bibfile.bib'\ntitle: 'Computer-assisted proofs for some nonlinear diffusion problems'\n---\n\nIntroduction\n============\n\nContext\n-------\n\nDiffusion is a key mechanism in many spatially extended system coming from Physics, Chemistry or Biology. Starting from the prototypical mathematical model used to describe such phenomena that is the heat equation $$\\begin{aligned}\n\\partial_t u = \\Delta u,\\end{aligned}$$ many more general models" +"---\nabstract: 'A generalized method of alternating resolvents was introduced by Boikanyo and Moro[\u015f]{}anu as a way to approximate common zeros of two maximal monotone operators. In this paper we analyse the strong convergence of this algorithm under two different sets of conditions. As a consequence we obtain effective rates of metastability (in the sense of Terence Tao) and quasi-rates of asymptotic regularity. Furthermore, we bypass the need for sequential weak compactness in the original proofs. 
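The graph map-reduce pattern named above (gather features along edges, then scatter-aggregate them into destination nodes) is the computational core of most GNN kernels. A generic numpy sketch of that pattern follows; it illustrates the idea only and is not tf\_geometric's actual API.

```python
import numpy as np

# Toy graph: 4 nodes, directed edges (src -> dst).
src = np.array([0, 1, 2, 3, 0])
dst = np.array([1, 2, 3, 0, 2])
h = np.arange(8, dtype=float).reshape(4, 2)   # node features (4 x 2)

# "Map": gather source-node features along each edge (a per-edge
# transform or weight could be applied here).
messages = h[src]

# "Reduce": segment-sum the messages into their destination nodes,
# the scatter-add underlying sum/mean GNN aggregators.
out = np.zeros_like(h)
np.add.at(out, dst, messages)
print(out)
```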
Our quantitative results are obtained using proof-theoretical techniques in the context of the proof mining program.'\nauthor:\n- 'Bruno Dinis[^1]'\n- 'Pedro Pinto[^2]'\nbibliography:\n- 'References.bib'\ntitle: 'Effective metastability for a method of alternating resolvents [^3]'\n---\n\nIntroduction\n============\n\nIn this paper we analyse the strong convergence of a generalized method of alternating resolvents in Hilbert spaces, introduced by Boikanyo and Moro[\u015f]{}anu.\n\nLet $H$ be a Hilbert space and ${\\mathsf{A}}$ and ${\\mathsf{B}}$ be two maximal monotone operators. Motivated by the convex feasibility problem and the alternating projections method [@combettes1997hilbertian; @Deutsch1985], the *method of alternating resolvents* is recursively defined as follows: $x_0 \\in H$ and $$\\begin{cases}\nx_{2n+1} = J^{{\\mathsf{A}}}_{\\beta_n}(x_{2n})\\\\\nx_{2n+2} = J^{{\\mathsf{B}}}_{\\mu_n}(x_{2n+1})\n\\end{cases}$$ where $(\\beta_n),(\\mu_n)$ are sequences of positive real numbers. This method was shown" +"---\nauthor:\n- 'A. Lizunov[!!]{}'\ntitle: Charge exchange radiation diagnostic with gas jet target for measurement of plasma flow velocity in the linear magnetic trap\n---\n\nIntroduction {#sec:intro}\n============\n\nLinear magnetic systems for plasma confinement, also frequently referred to as open-ended traps, have the common issue of field lines facing the grounded metallic wall somewhere beyond the mirror. The particular magnetic field configuration for different devices varies. In order for these confinement concepts to be attractive for real applications, the axial heat flux through the direct contact with the wall must be radically suppressed compared to the classic Spitzer [@spitzer] heat conductivity. The gas dynamic trap (GDT) [@gdt-review-ppcf] utilizes a strongly expanding magnetic \u201cfan\u201d beyond the mirror with a straight or curved inwards field line shape. The axial profile of the plasma electrostatic potential plays a crucial role in forming the actual heat transport physics. In a steady state, this ambipolar potential equalises the electron and ion currents onto the wall. Study of the axial particle and energy transport\u00a0[@nf-axconf-2020] is one of the top priorities in the GDT scientific task list. This activity embraces new diagnostics development as well as experimental and theoretical research.\n\nLayout of the GDT device in the" +"---\nabstract: 'Can free agency be compatible with determinism? Compatibilists argue that the answer is yes, and it has been suggested that the computer science principle of \u201ccomputational irreducibility\u201d sheds light on this compatibility. It implies that there cannot in general be shortcuts to predict the behavior of agents, explaining why deterministic agents often appear to act freely. In this paper, we introduce a variant of computational irreducibility that intends to capture more accurately aspects of actual (as opposed to apparent) free agency: computational sourcehood, i.e.\u00a0the phenomenon that the successful prediction of a process\u2019 behavior must typically involve an almost-exact representation of the relevant features of that process, regardless of the time it takes to arrive at the prediction. We argue that this can be understood as saying that the process itself is the source of its actions, and we conjecture that many computational processes have this property. 
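For the special case where ${\mathsf{A}}$ and ${\mathsf{B}}$ are the normal cones of closed convex sets, the resolvents $J^{{\mathsf{A}}}_{\beta_n}$ and $J^{{\mathsf{B}}}_{\mu_n}$ reduce to metric projections (for any positive step sizes), so the recursion displayed above becomes the classical alternating-projections scheme mentioned as motivation. A small runnable sketch, with two arbitrary convex sets chosen for illustration:

```python
import numpy as np

def proj_disk(x):                      # J^A: projection onto the unit disk
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

def proj_line(x):                      # J^B: projection onto the line y = 0.8
    return np.array([x[0], 0.8])

x = np.array([2.0, 2.0])
for n in range(50):
    x = proj_disk(x)                   # x_{2n+1} = J^A_{beta_n}(x_{2n})
    x = proj_line(x)                   # x_{2n+2} = J^B_{mu_n}(x_{2n+1})
print(x)                               # -> approx (0.6, 0.8), a common point
```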
The main contribution of this paper is technical: we analyze whether and how a sensible formal definition of computational sourcehood is possible. While we do not answer the question completely, we show how it is related to finding a particular simulation preorder on Turing machines, we uncover concrete stumbling blocks towards" +"---\nabstract: 'Effective training of deep neural networks (DNNs) usually requires labeling a large dataset, which is time and labor intensive. Recently, various data augmentation strategies like regional dropout and mix strategies have been proposed, which are effective as the augmented dataset can guide the model to attend on less discriminative parts. However, these strategies operate only at the image level, where the objects and the background are coupled. Thus, the boundaries are not well augmented due to the fixed semantic scenario. In this paper, we propose ObjectAug to perform object-level augmentation for semantic image segmentation. Our method first decouples the image into individual objects and the background using semantic labels. Second, each object is augmented individually with commonly used augmentation methods (e.g., scaling, shifting, and rotation). Third, the pixel artifacts brought by object augmentation are further restored using image inpainting. Finally, the augmented objects and background are assembled as an augmented image. In this way, the boundaries can be fully explored in the various semantic scenarios. In addition, ObjectAug can support category-aware augmentation that gives various possibilities to objects in each category, and can be easily combined with existing image-level augmentation methods to further boost the performance. Comprehensive experiments" +"---\nabstract: 'We describe a new class of nonequilibrium quantum many-body phenomena in the form of networks of caustics that dominate the many-body wavefunction in the semiclassical regime following a sudden quench. It includes the light cone-like propagation of correlations as a particular case. Caustics are singularities formed by the birth and death of waves and form a hierarchy of universal patterns whose natural mathematical description is via catastrophe theory. Examples in classical waves range from rainbows and gravitational lensing in optics to tidal bores and rogue waves in hydrodynamics. Quantum many-body caustics are discretized by second-quantization (\u201cquantum catastrophes\u201d) and live in Fock space which can potentially have many dimensions. We illustrate these ideas using the Bose Hubbard dimer and trimer models which are simple enough that the caustic structure can be elucidated from first principles and yet run the full range from integrable to nonintegrable dynamics. The dimer gives rise to discretized versions of fold and cusp catastrophes whereas the trimer allows for higher catastrophes including the codimension-3 hyperbolic and elliptic umbilics which are organized by, and projections of, an 8-dimensional corank-2 catastrophe known as $X_9$. These results describe a hitherto unrecognized form of universality in quantum dynamics organized" +"---\nabstract: 'Impacts of observational systematic errors on the lensing analysis of the cosmic microwave background (CMB) polarization are investigated by numerical simulations. We model errors of gain, angle, and pointing in observation of the CMB polarization and simulate polarization fields modulated by the errors. 
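The four ObjectAug steps described above (decouple, augment, inpaint, assemble) can be sketched compactly for a `uint8` RGB image `img` and an integer label map `seg`. The translation-only augmentation, the background label 0, and the OpenCV inpainting call are illustrative choices under stated assumptions, not the paper's exact pipeline.

```python
import numpy as np
import cv2

def object_aug(img, seg, cls, dx=10, dy=0):
    """Move one object of class `cls` and inpaint the hole it leaves.
    Assumes img is uint8 HxWx3, seg is an integer HxW label map, and
    label 0 denotes background (illustrative assumptions)."""
    mask = (seg == cls).astype(np.uint8)
    h, w = seg.shape
    M = np.float32([[1, 0, dx], [0, 1, dy]])           # simple translation

    # 1) decouple + 2) augment the object alone
    obj = cv2.warpAffine(img * mask[..., None], M, (w, h))
    new_mask = cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_NEAREST)

    # 3) restore the uncovered background pixels by inpainting
    hole = ((mask == 1) & (new_mask == 0)).astype(np.uint8)
    background = cv2.inpaint(img * (1 - mask)[..., None], hole, 3,
                             cv2.INPAINT_TELEA)

    # 4) assemble the moved object over the repaired background
    out = background * (1 - new_mask)[..., None] + obj * new_mask[..., None]
    new_seg = np.where(new_mask == 1, cls, np.where(hole == 1, 0, seg))
    return out.astype(img.dtype), new_seg.astype(seg.dtype)
```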
We discuss the response of systematics-induced $B$-modes to amplitude and spatial scale of the imposed errors and show that the results of the lensing reconstruction and delensing analysis behave accordingly. It is observed that error levels expected in the near future lead to no significant degradation in delensing efficiency.'\nauthor:\n- Ryo Nagata\n- Toshiya Namikawa\nbibliography:\n- 'refs.bib'\ntitle: A numerical study of observational systematic errors in lensing analysis of CMB polarization\n---\n\nIntroduction\n============\n\nDetection of the primordial gravitational waves (GWs) which originate from cosmic inflation is expected to provide us with an opportunity to understand a very early stage of the universe. An attempt to extract a signal of the primordial GWs from $B$-modes of the cosmic microwave background (CMB) polarization is one of the most promising ways to this end [@Polnarev:1985; @Kamionkowski:1996:GW]. The tightest bound to-date on the amplitude of the primordial GWs characterized by the tensor-to-scalar ratio, $r$," +"---\nabstract: 'Cleft lip and palate (CLP) refer to a congenital craniofacial condition that causes various speech-related disorders. As a result of structural and functional deformities, the affected subjects\u2019 speech intelligibility is significantly degraded, limiting the accessibility and usability of speech-controlled devices. Towards addressing this problem, it is desirable to improve the CLP speech intelligibility. Moreover, it would be useful during speech therapy. In this study, the cycle-consistent adversarial network (CycleGAN) method is exploited for improving CLP speech intelligibility. The model is trained on native Kannada-speaking children\u2019s speech data. The effectiveness of the proposed approach is also measured using automatic speech recognition performance. Further, subjective evaluation is performed, and those results also confirm the intelligibility improvement in the enhanced speech over the original.'\naddress: |\n $^1$Indian Institute of Technology Guwahati, Guwahati, India\\\n $^2$Indian Institute of Technology Dharwad, Dharwad, India\\\n $^3$National University of Singapore, Singapore\nbibliography:\n- 'vwl\\_enh.bib'\ntitle: |\n Enhancing the Intelligibility of Cleft Lip and Palate Speech using\\\n Cycle-consistent Adversarial Networks\n---\n\nCLP speech, intelligibility, CycleGAN, enhancement, speech disorder\n\nIntroduction\n============\n\nIndividuals with cleft lip and palate (CLP) suffer from speech disorders due to velopharyngeal dysfunction, oro-nasal fistula, and mislearning\u00a0[@kummer2013cleft]. As a result, children with CLP may" +"---\nabstract: 'This paper proposes a novel multiple-input multiple-output (MIMO) symbol detector that incorporates a deep reinforcement learning (DRL) agent into the Monte Carlo tree search (MCTS) detection algorithm. We first describe how the MCTS algorithm, used in many decision-making problems, is applied to the MIMO detection problem. Then, we introduce a self-designed deep reinforcement learning agent, consisting of a policy value network and a state value network, which is trained to detect MIMO symbols. The outputs of the trained networks are adopted into a modified MCTS detection algorithm to provide useful node statistics and facilitate an enhanced tree search process. 
The resulting scheme, termed the DRL-MCTS detector, demonstrates significant improvements over the original MCTS detection algorithm and exhibits favorable performance compared to other existing linear and DNN-based detection methods under varying channel conditions.'\nauthor:\n- 'Tz-Wei Mo, Ronald Y. Chang, , and Te-Yi Kan [^1] [^2]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'references.bib'\ntitle: Deep Reinforcement Learning Aided Monte Carlo Tree Search for MIMO Detection\n---\n\nMIMO detection, neural networks, deep reinforcement learning, Monte Carlo tree search.\n\nIntroduction\n============\n\nIt is evident that the usage of mobile communication continues to rise throughout the years. To match the increasing needs, newer generations of" +"---\nabstract: 'Fluorescence microscopy images contain several channels, each indicating a marker staining the sample. Since many different marker combinations are utilized in practice, it has been challenging to apply deep learning based segmentation models, which expect a predefined channel combination for all training samples as well as at inference for future application. Recent work circumvents this problem using a modality attention approach to be effective across any possible marker combination. However, for combinations that do not exist in a labeled training dataset, one cannot have any estimation of potential segmentation quality if that combination is encountered during inference. Without this, not only does one lack quality assurance, but one also does not know where to put any additional imaging and labeling effort. We herein propose a method to estimate segmentation quality on unlabeled images by ($i$)\u00a0estimating both aleatoric and epistemic uncertainties of convolutional neural networks for image segmentation, and ($ii$)\u00a0training a Random Forest model for the interpretation of uncertainty features via regression to their corresponding segmentation metrics. Additionally, we demonstrate that including these uncertainty measures during training can provide an improvement on segmentation performance.'\naddress: |\n $^{1}$ Computer-assisted Applications in Medicine, ETH Zurich, Switzerland\\\n $^{2}$ Dept.\u00a0of Medical" +"---\nabstract: '*Ab initio* quantum Monte Carlo (QMC) methods in principle allow for the calculation of exact properties of correlated many-electron systems, but are in general limited to the simulation of a finite number of electrons $N$ in periodic boundary conditions. Therefore, an accurate theory of finite-size effects is indispensable to bridge the gap to realistic applications in the thermodynamic limit. In this work, we revisit the uniform electron gas (UEG) at finite temperature as it is relevant to contemporary research e.g. in the field of warm dense matter. In particular, we present a new scheme to eliminate finite-size effects both in the static structure factor $S(q)$ and in the interaction energy $v$, which is based on the density response formalism. We demonstrate that this method often allows one to obtain $v$ in the TDL within a relative accuracy of $\\sim0.2\\%$ from as few as $N=4$ electrons without any empirical choices or knowledge of results for other values of $N$. 
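The two-stage recipe in the microscopy abstract above (uncertainty features in, segmentation metric out) can be sketched in a few lines. The stochastic "segmenter," the three summary features, and the placeholder quality labels below are stand-ins, assumed for illustration; the paper's actual feature set and networks differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def predict(image):                        # stand-in stochastic segmenter
    return np.clip(image + 0.1 * rng.normal(size=image.shape), 0, 1)

def uncertainty_features(image, T=20):
    probs = np.stack([predict(image) for _ in range(T)])   # T x H x W
    mean = probs.mean(axis=0)
    aleatoric = (mean * (1 - mean)).mean()   # data-noise proxy
    epistemic = probs.var(axis=0).mean()     # model-disagreement proxy
    return np.array([aleatoric, epistemic, mean.mean()])

# Interpreter: regress a segmentation metric (e.g. Dice) on the features.
images = [rng.random((32, 32)) for _ in range(60)]
dice = rng.random(60)                        # placeholder metric labels
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(np.stack([uncertainty_features(im) for im in images]), dice)

# Quality estimate for an unlabeled image, no ground truth required:
print(rf.predict(uncertainty_features(rng.random((32, 32)))[None]))
```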
Finally, we evaluate the applicability of our method upon increasing the density parameter $r_s$ and decreasing the temperature $T$.'\nauthor:\n- Tobias Dornheim\n- Jan Vorberger\nbibliography:\n- 'bibliography.bib'\ntitle: ' Overcoming finite-size effects in electronic structure simulations at extreme conditions" +"---\nabstract: 'The $b\\to s(d)$ quark-level transitions are flavor-changing neutral current processes, which are not allowed at tree level in the standard model. These processes are very rare and constitute a potential probe for new physics. Belle II at SuperKEKB is a substantial upgrade of the Belle experiment. It aims to collect 50 ab$^{-1}$ of data with a design peak luminosity of $8\\times 10^{35}$ cm$^{-2}$s$^{-1}$, which is 40 times that of its predecessor. It has been recording data since 2019 and during these early days of the experiment, efforts are being made to detect early signals of the above decays. We report the first reconstruction in Belle II data of a $B\\to K^{*}\\gamma$ signal as well as future prospects for radiative and electroweak decays at Belle II.'\nauthor:\n- |\n Soumen Halder\\\n (On behalf of the Belle II Collaboration)\ntitle: Results and prospects of radiative and electroweak penguin decays at Belle II\n---\n\nIntroduction\n============\n\nThe flavor-changing neutral current processes mediated by $b\\to s(d)$ transitions are forbidden at tree level in the standard model (SM). These processes can however proceed via higher-order amplitudes involving quantum loops. Non-SM particles may contribute in such loops as exemplified in Fig\u00a0\\[fig:intro\\], which could" +"---\nabstract: 'In this paper, we study the suitability of neuromorphic event-based vision cameras for spaceflight, and the effects of neutron radiation on their performance. Neuromorphic event-based vision cameras are novel sensors that implement asynchronous, clockless data acquisition, providing information about changes in illuminance over a wide dynamic range ($\\ge 120$ dB) with sub-millisecond temporal precision. These sensors have huge potential for space applications as they provide an extremely sparse representation of visual dynamics while removing redundant information, thereby conforming to low-resource requirements. An event-based sensor was irradiated under wide-spectrum neutrons at Los Alamos Neutron Science Center and its effects were classified. Radiation-induced damage of the sensor under wide-spectrum neutrons was tested, as was the radiative effect on the signal-to-noise ratio of the output at different angles of incidence from the beam source. We found that the sensor had very fast recovery during radiation, showing high correlation of noise event bursts with respect to source macro-pulses. No statistically significant differences were observed between the number of events induced at different angles of incidence but significant differences were found in the spatial structure of noise events at different angles. The results show that event-based cameras are capable of functioning in a space-like, radiative environment with" 
We study Smith-Purcell radiation from a conducting grating generated by a vortex electron, described as a generalized Laguerre-Gaussian packet, which has an intrinsic magnetic dipole moment and an electric quadrupole moment. By using a multipole expansion of the electromagnetic field of such an electron, we employ a generalized surface-current method, applicable for a wide range of parameters. The radiated energy contains contributions from the charge, from the magnetic moment, and from the electric quadrupole moment, as well as from their interference. The quadrupole contribution grows as the packet spreads while propagating, and it is enhanced for large $\\ell$. In contrast to the linear growth of the radiation intensity from the charge with the number of strips $N$, the quadrupole contribution reveals an $N^3$ dependence, which puts a limit on the maximal grating length for which the radiation losses stay small. We study spectral-angular distributions of the Smith-Purcell radiation both analytically and numerically and demonstrate that the electron\u2019s vorticity can give rise to detectable" +"---\nabstract: 'Transformer-based knowledge tracing is an extensively studied problem in the field of computer-aided education. By integrating temporal features into the encoder-decoder structure, transformers can process the exercise information and student response information in a natural way. However, current state-of-the-art transformer-based variants still share two limitations. First, extremely long temporal features cannot be handled well, as the complexity of the self-attention mechanism is $O(n^2)$. Second, existing approaches track the knowledge drifts under a fixed window size, without considering different temporal-ranges. To conquer these problems, we propose MUSE, which is equipped with a multi-scale temporal sensor unit that takes either local or global temporal features into consideration. The proposed model is capable of capturing the dynamic changes in users\u2019 knowledge states at different temporal-ranges, and provides an efficient and powerful way to combine local and global features to make predictions. Our method won the 5-th place over 3,395 teams in the Riiid AIEd Challenge 2020.'\nauthor:\n- |\n Chengwei Zhang, Yangzhou Jiang, Wei Zhang, Chengyu Gu\\\n Shanghai Jiao Tong University\\\n {cwzhang, jiangyangzhou, gcy950912}@sjtu.edu.cn, mercurialzhang@gmail.com\nbibliography:\n- 'bibliography.bib'\ntitle: 'MUSE: Multi-Scale Temporal Features Evolution for Knowledge Tracing'\n---\n\nIntroduction\n============\n\nThe recent COVID-19 pandemic has forced most countries to temporarily close schools and offline-education" 
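The multi-scale idea behind MUSE, combining local views at several window sizes with a global view, can be illustrated with plain pooling over an interaction sequence. This is a generic sketch of the concept, not the model's actual sensor unit; window sizes and feature dimensions are arbitrary.

```python
import numpy as np

def multi_scale_features(seq, scales=(5, 20, 100)):
    """seq: (T, d) per-interaction embeddings. Returns (T, d*(len(scales)+1)):
    for each step, mean-pooled history over several windows plus a global mean."""
    T, d = seq.shape
    csum = np.vstack([np.zeros((1, d)), np.cumsum(seq, axis=0)])
    feats = []
    for w in scales:
        lo = np.maximum(0, np.arange(1, T + 1) - w)
        win = (csum[1:] - csum[lo]) / (np.arange(1, T + 1) - lo)[:, None]
        feats.append(win)                                    # local view at scale w
    feats.append(csum[1:] / np.arange(1, T + 1)[:, None])    # global view
    return np.concatenate(feats, axis=1)

print(multi_scale_features(np.random.rand(300, 8)).shape)    # (300, 32)
```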
We run an additional analysis of the model\u2019s lexical representation space, showing that the two training languages are not fully separated in that space, similarly to the languages of a bilingual human speaker.'\nauthor:\n- |\n Yevgen Matusevych\\\n School of Informatics\\\n University of Edinburgh\\\n `yevgen.matusevych@ed.ac.uk`\\\n Herman Kamper\\\n E&E Engineering\\\n Stellenbosch University\\\n `kamperh@sun.ac.za`\\\n Thomas Schatz\\\n Department of Linguistics & UMIACS\\\n University of Maryland\\\n `tschatz@umd.edu`\\\n Naomi H. Feldman\\\n Department of Linguistics & UMIACS\\\n University of Maryland\\\n `nhf@umd.edu`\\\n Sharon Goldwater\\\n School of Informatics\\\n University of Edinburgh\\\n `sgwater@inf.ed.ac.uk`\\\nbibliography:\n- 'references.bib'\ntitle: 'A phonetic model of" +"---\nabstract: 'Despite their outstanding mechanical properties, with many industrial applications, a rational and systematic design of new and controlled auxetic materials remains poorly developed. Here a unified framework is established to describe bidimensional perfect auxetics with potential use in the design of new materials. Perfect auxetics are characterized by a Poisson\u2019s ratio $\\nu=-1$ over a finite strain range and can be modeled as materials composed of rotating rigid units. Inspired by a natural connection between these rotating rigid units with an antiferromagnetic spin system, here are unveiled the conditions for the emergence of a non-trivial floppy mode responsible for the auxetic behavior. Furthermore, this model paves a simple pathway for the design of new auxetic materials, based on three simple steps, which set the sufficient connectivity and geometrical constraints for perfect auxetics. In particular, a new exotic crystal, a Penrose quasi-crystal and the long desired isotropic auxetic material are designed and constructed for the first time. Using 3D printed materials, finite element methods and this rigid unit model, the auxetic behavior of these designs is shown to be robust under small disturbances in the structure, though the Poisson\u2019s ratio value relies on system\u2019s details, approaching $-1$ close to the" +"---\nbibliography:\n- 'manuscript.bib'\n---\n\n0.25in\n\n[The\u00a0$\\operatorname{SL(2,\\mathbb{R})}$ Wess-Zumino-Novikov-Witten spin-chain $\\sigma$-model]{}\n\n0.25in\n\n[**Roberto Ruiz**]{} 0.1in\n\nDepartamento de F[\u00ed]{}sica Te[\u00f3]{}rica\\\nand\\\nInstituto de F[\u00ed]{}sica de Part[\u00ed]{}culas y del Cosmos (IPARCOS),\\\nUniversidad Complutense de Madrid,\\\n$28040$ Madrid, Spain\\\n\n.4in\n\n**Abstract**\n\n.1in\n\nThe\u00a0$\\operatorname{SL(2,\\mathbb{R})}$ Wess-Zumino-Novikov-Witten model realises bosonic-string theory in\u00a0$\\operatorname{AdS_{3}}$ with pure Neveu-Schwarz-Neveu-Schwarz flux. We construct an effective action in the semi-classical limit of the model, which corresponds to a\u00a0$\\operatorname{SL(2,\\mathbb{R})}$ spin-chain $\\sigma$-model. We adopt two complementary points of view. First, we consider the classical action. We identify fast and slow target-space coordinates. We impose a gauge-fixing condition to the former. By expanding the gauge-fixed action in an effective coupling, we obtain the effective action for the slow coordinates. Second, we consider the spin chain of the model. We postulate a set of coherent states to express a transition amplitude in the spin chain as a path integral. We observe that the temporal interval is discretised in terms of the step length of the spatial interval. 
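Returning to the rotating-rigid-units picture in the auxetics abstract above: the canonical corner-hinged rotating-squares mechanism (one standard realization of a perfect auxetic, assumed here for illustration) can be checked numerically. Both unit-cell dimensions are computed independently from the hinge geometry; they stay equal as the squares counter-rotate, so the in-plane Poisson's ratio is exactly $-1$.

```python
import numpy as np

def R(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def cell(theta, l=1.0):
    """Unit-cell repeat lengths of corner-hinged squares rotated by +/- theta.
    Each period spans two squares; hinge corners are listed explicitly."""
    X = 2 * abs((R(theta) @ [ l / 2,  l / 2] - R(-theta) @ [-l / 2,  l / 2])[0])
    Y = 2 * abs((R(theta) @ [-l / 2,  l / 2] - R(-theta) @ [-l / 2, -l / 2])[1])
    return X, Y

t = np.linspace(0.05, 0.6, 6)
for a, b in zip(t[:-1], t[1:]):
    (X0, Y0), (X1, Y1) = cell(a), cell(b)
    ex, ey = (X1 - X0) / X0, (Y1 - Y0) / Y0
    print(f"theta {a:.2f}->{b:.2f}: Poisson ratio = {-ey / ex:+.3f}")
# every increment prints -1.000: equal biaxial strain over a finite range
```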
This relationship implies that the Landau-Lifshitz limit of the spin chain involves both intervals. The limit yields a semi-classical path integral over coherent states, wherein we identify the effective action again.\n\n@addtoreset\n\nIntroduction {#seccionuno}\n============\n\nThe" +"---\nabstract: 'Modern classification models tend to struggle when the amount of annotated data is scarce. To overcome this issue, several neural few-shot classification models have emerged, yielding significant progress over time, both in Computer Vision and Natural Language Processing. In the latter, such models used to rely on fixed word embeddings before the advent of transformers. Additionally, some models used in Computer Vision are yet to be tested in NLP applications. In this paper, we compare all these models, first adapting those made in the field of image processing to NLP, and second providing them access to transformers. We then test these models equipped with the same transformer-based encoder on the intent detection task, known for having a large number of classes. Our results reveal that while methods perform almost equally on the ARSC dataset, this is not the case for the Intent Detection task, where the most recent and supposedly best competitors perform worse than older and simpler ones (while all are given access to transformers). We also show that a simple baseline is surprisingly strong. All the new developed models, as well as the evaluation framework, are made publicly available[^1].'\nbibliography:\n- 'anthology.bib'\n- 'mybib.bib'\ntitle: 'A" +"---\nabstract: 'Few-shot image classification consists of two consecutive learning processes: 1) In the meta-learning stage, the model acquires a knowledge base from a set of training classes. 2) During meta-testing, the acquired knowledge is used to recognize unseen classes from very few examples. Inspired by the compositional representation of objects in humans, we train a neural network architecture that explicitly represents objects as a dictionary of shared components and their spatial composition. In particular, during meta-learning, we train a knowledge base that consists of a dictionary of component representations and a dictionary of component activation maps that encode common spatial activation patterns of components. The elements of both dictionaries are shared among the training classes. During meta-testing, the representation of unseen classes is learned using the component representations and the component activation maps from the knowledge base. Finally, an attention mechanism is used to strengthen those components that are most important for each category. We demonstrate the value of our interpretable compositional learning framework for a few-shot classification using miniImageNet, tieredImageNet, CIFAR-FS, and FC100, where we achieve comparable performance.'\nauthor:\n- |\n Ju He$^1$ Adam Kortylewski$^{1,2,3}$ Alan Yuille$^1$\\\n $^1$Johns Hopkins University $^2$Max Planck Institute for Informatics $^3$University of Freiburg" +"---\nabstract: 'The field of DNA nanotechnology has made it possible to assemble, with high yields, different structures that have actionable properties. For example, researchers have created components that can be actuated, used to sense (e.g., changes in pH), or to store and release loads. 
An exciting next step is to combine these components into multifunctional nanorobots that could, potentially, perform complex tasks like swimming to a target location in the human body, detect an adverse reaction and then release a drug load to stop it. However, as we start to assemble more complex nanorobots, the yield of the desired nanorobot begins to decrease as the number of possible component combinations increases. Therefore, the ultimate goal of this work is to develop a predictive model to maximize yield. However, training predictive models typically requires a large dataset. For the nanorobots we are interested in assembling, this will be difficult to collect. This is because high-fidelity data, which allows us to exactly characterize the shape and size of individual structures, is very time-consuming to collect, whereas low-fidelity data is readily available but only captures bulk statistics for different processes. Therefore, this work combines low- and high-fidelity data to train a generative" +"---\nabstract: 'The layered crystal of EuSn$_2$As$_2$ has a Bi$_2$Te$_3$-type structure in rhombohedral ($R\\bar{3}m$) symmetry and has been confirmed to be an intrinsic magnetic topological insulator at ambient conditions. Combining [*ab initio*]{} calculations and *in-situ* x-ray diffraction measurements, we identify a new monoclinic EuSn$_2$As$_2$ structure in $C2/m$ symmetry above $\\sim$14 GPa. It has a three-dimensional network made up of honeycomb-like Sn sheets and zigzag As chains, transformed from the layered EuSn$_2$As$_2$ via a two-stage reconstruction mechanism with the connecting of Sn-Sn and As-As atoms successively between the buckled SnAs layers. Its dynamic structural stability has been verified by phonon mode analysis. Electrical resistance measurements reveal an insulator-metal-superconductor transition at low temperature around 5 and 15 GPa, respectively, according to the structural conversion, and the superconductivity with a *T*${\\rm {_C}}$ value of $\\sim 4$ K is observed up to 30.8 GPa. These results establish a high-pressure EuSn$_2$As$_2$ phase with intriguing structural and electronic properties and expand our understandings about the layered magnetic topological insulators.'\nauthor:\n- Lin Zhao\n- Changjiang Yi\n- 'Chang-Tian Wang'\n- Zhenhua Chi\n- Yunyu Yin\n- Xiaoli Ma\n- Jianhong Dai\n- Pengtao Yang\n- Binbin Yue\n- Jinguang Cheng\n- Fang Hong\n- 'Jian-Tao Wang'" +"---\nabstract: 'The primary goal of the Carnegie Chicago Hubble Program (CCHP) is to calibrate the zero-point of the Type Ia supernova (SN\u00a0Ia) Hubble Diagram through the use of Population II standard candles. So far, the CCHP has measured direct distances to 11 SNe\u00a0Ia, and here we increase that number to 15 with two new TRGB distances measured to NGC\u00a05643 and NGC\u00a01404, for a total of 20 SN\u00a0Ia calibrators. We present resolved, point-source photometry from new Hubble Space Telescope (HST) imaging of these two galaxies in the F814W and F606W bandpasses. From each galaxy\u2019s stellar halo, we construct an F814W-band luminosity function in which we detect an unambiguous edge feature identified as the Tip of the Red Giant Branch (TRGB). For NGC\u00a05643, we find $ \\mu_0 = $[ $ \\distmodGALONEround~ \\pm$ (stat) $\\pm$ (sys)]{}\u00a0mag, and for NGC\u00a01404 we find $ \\mu_0 = $[ $ \\distmodGALTWOround~ \\pm$ (stat) $\\pm$ (sys)]{}\u00a0mag. 
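A common way to locate the tip in an F814W-band luminosity function, as in the TRGB measurement just described, is an edge (first-difference, Sobel-like) filter applied to the smoothed luminosity function. The sketch below uses synthetic magnitudes with an artificial tip at $m = 24.0$; it illustrates the generic edge-detection step only, not the CCHP pipeline's specifics.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic F814W magnitudes: the RGB begins abruptly at the tip (m = 24.0),
# with sparse AGB contaminants brighter than it.
mags = np.r_[24.0 + 2.5 * rng.random(20000),      # RGB, faintward of the tip
             23.0 + 1.0 * rng.random(500)]        # AGB, brighter than the tip

bins = np.arange(22.5, 26.5, 0.02)
lf, _ = np.histogram(mags, bins=bins)
lf = np.convolve(lf, np.ones(5) / 5, mode="same")     # smooth the LF
edge = np.convolve(lf, [1, 0, -1], mode="same")       # rising-edge filter
centers = 0.5 * (bins[1:] + bins[:-1])
print(f"detected TRGB at m = {centers[np.argmax(edge)]:.2f}")   # ~24.0
```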
From a preliminary consideration of the SNe\u00a0Ia in these galaxies, we find increased confidence in the results presented in Paper VIII [@freedman_2019]. The high precision of our TRGB distances enables a significant measurement of the 3D displacement between the Fornax Cluster" +"---\nabstract: 'The explosive growth of bandwidth hungry Internet applications has led to the rapid development of new generation mobile network technologies that are expected to provide broadband access to the Internet in a pervasive manner. For example, 6G networks are capable of providing high-speed network access by exploiting higher frequency spectrum; high-throughput satellite communication services are also adopted to achieve pervasive coverage in remote and isolated areas. In order to enable seamless access, Integrated Satellite-Terrestrial Communication Networks (ISTCN) has emerged as an important research area. ISTCN aims to provide high speed and pervasive network services by integrating broadband terrestrial mobile networks with satellite communication networks. As terrestrial mobile networks began to use higher frequency spectrum (between 3GHz to 40GHz) which overlaps with that of satellite communication (4GHz to 8GHz for C band and 26GHz to 40GHz for Ka band), there are opportunities and challenges. On one hand, satellite terminals can potentially access terrestrial networks in an integrated manner; on the other hand, there will be more congestion and interference in this spectrum, hence more efficient spectrum management techniques are required. In this paper, we propose a new technique to improve spectrum sharing performance by introducing Non-orthogonal Frequency Division Multiplexing" +"---\nabstract: 'Cryptographic protocols have been widely used to protect the user\u2019s privacy and avoid exposing private information. QUIC (Quick UDP Internet Connections), including the version originally designed by Google (GQUIC) and the version standardized by IETF (IQUIC), as alternatives to the traditional HTTP, demonstrate their unique transmission characteristics: based on UDP for encrypted resource transmitting, accelerating web page rendering. However, existing encrypted transmission schemes based on TCP are vulnerable to website fingerprinting (WFP) attacks, allowing adversaries to infer the users\u2019 visited websites by eavesdropping on the transmission channel. Whether GQUIC and IQUIC can effectively resist such attacks is worth investigating. In this paper, we study the vulnerabilities of GQUIC, IQUIC, and HTTPS to WFP attacks from the perspective of traffic analysis. Extensive experiments show that, in the early traffic scenario, GQUIC is the most vulnerable to WFP attacks among GQUIC, IQUIC, and HTTPS, while IQUIC is more vulnerable than HTTPS, but the vulnerability of the three protocols is similar in the normal full traffic scenario. Features transferring analysis shows that most features are transferable between protocols when on normal full traffic scenario, which enables the adversary to use features proven effective on a special protocol efficiently attacking a new" +"---\nabstract: |\n We study the critical level statistics at the many-body localization (MBL) transition region in random spin systems. By employing the inter-sample randomness as an indicator, we manage to locate the MBL transition point in both orthogonal and unitary models. 
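Level statistics of the kind used in this MBL analysis are commonly probed with the consecutive-gap ratio; the sketch below computes the standard first-order statistic, which the next passage generalizes to $n$-th order, and recovers the known Poisson ($\langle r\rangle = 2\ln 2 - 1 \approx 0.386$) and GOE ($\approx 0.531$) means. The spectra here are generic stand-ins, not the paper's spin-chain models.

```python
import numpy as np

def mean_gap_ratio(levels):
    """<r> with r_k = min(s_k, s_{k+1}) / max(s_k, s_{k+1})."""
    s = np.diff(np.sort(levels))
    return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

rng = np.random.default_rng(2)
poisson = np.cumsum(rng.exponential(size=20000))      # uncorrelated levels
H = rng.normal(size=(2000, 2000)); H = (H + H.T) / 2  # GOE random matrix
print(mean_gap_ratio(poisson))                 # ~0.386 = 2 ln 2 - 1
print(mean_gap_ratio(np.linalg.eigvalsh(H)))   # ~0.531 (GOE level repulsion)
```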
We further count the $n$-th order gap ratio distributions at the transition region up to $n=4$, and find they fit well with the short-range plasma model (SRPM) with inverse temperature $\\beta =1$ for the orthogonal model and $\\beta =2$ for the unitary one. These critical level statistics are argued to be universal by comparing results from systems both with and without total $S_{z}$ conservation. We also point out that these critical distributions can emerge from the spectrum of a Poisson ensemble, which indicates the thermal-MBL transition point is more affected by the MBL phase than by the thermal phase.\nauthor:\n- 'Wen-Jia Rao'\ntitle: 'Critical Level Statistics at the Many-Body Localization Transition Region'\n---\n\nIntroduction {#sec1}\n============\n\nThe non-equilibrium phases of matter in isolated quantum systems are a focus of modern condensed matter physics; it is now well established that two generic phases exist: a thermal phase and a many-body localized (MBL) phase[@Gornyi2005; @Basko2006]. Physically, a thermal phase is ergodic with extended" +"---\nabstract: 'In September 2020, the Broadband Forum published a new industry standard for measuring network quality. The standard centers on the notion of quality attenuation. Quality attenuation is a measure of the distribution of latency and packet loss between two points connected by a network path. A vital feature of the quality attenuation idea is that we can express detailed application requirements and network performance measurements in the same mathematical framework. Performance requirements and measurements are both modeled as latency distributions. To the best of our knowledge, existing models of the 802.11 WiFi protocol do not permit the calculation of complete latency distributions without assuming steady-state operation. We present a novel model of the WiFi protocol. Instead of computing throughput numbers from a steady-state analysis of a Markov chain, we explicitly model latency and packet loss. Explicitly modeling latency and loss allows for both transient and steady-state analysis of latency distributions, and we can derive throughput numbers from the latency results. Our model is, therefore, more general than the standard Markov chain methods. We reproduce several known results with this method. Using transient analysis, we derive bounds on WiFi throughput under the requirement that latency and packet loss must" +"---\nabstract: 'We compare the star forming main sequence (SFMS) \u2013 both integrated and resolved on 1kpc scales \u2013 between the high-resolution TNG50 simulation of IllustrisTNG and observations from the 3D-HST slitless spectroscopic survey at $z\\sim1$. Contrasting integrated star formation rates (SFRs), we find that the slope and normalization of the star-forming main sequence in TNG50 are quantitatively consistent with values derived by fitting observations from 3D-HST with the [`Prospector`]{}\u00a0Bayesian inference framework. The previous offsets of 0.2-1\u00a0dex between observed and simulated main sequence normalizations are resolved when using the updated masses and SFRs from [`Prospector`]{}. The scatter is generically smaller in TNG50 than in 3D-HST for more massive galaxies with [M$_*$]{}$>10^{10}$[M$_{\\odot}$]{}, even after accounting for observational uncertainties. 
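In the quality-attenuation framework quoted above, requirements and measurements live in the same space of latency distributions. Assuming independent segments (a simplifying assumption made here for illustration), the end-to-end distribution of a two-hop path is the convolution of the per-hop distributions, so checking an application requirement reduces to a direct CDF comparison:

```python
import numpy as np

dt = 0.1                                    # ms per histogram bin
t = np.arange(0, 50, dt)
hop1 = np.exp(-((t - 5) ** 2) / 2.0)        # latency pdfs of two segments
hop2 = np.exp(-((t - 8) ** 2) / 8.0)
hop1 /= hop1.sum() * dt
hop2 /= hop2.sum() * dt

path = np.convolve(hop1, hop2) * dt         # end-to-end latency pdf
t_path = np.arange(len(path)) * dt
cdf = np.cumsum(path) * dt

# A requirement expressed in the same language, e.g. "95% of packets
# within 20 ms", is checked directly against the composed CDF.
ok = cdf[np.searchsorted(t_path, 20.0)] >= 0.95
print("requirement met:", ok)
```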
When comparing resolved star formation, we also find good agreement between TNG50 and 3D-HST: average specific star formation rate (sSFR) radial profiles of galaxies at all masses and radii below, on, and above the SFMS are similar in both normalization and *shape*. Most noteworthy, massive galaxies with [M$_*$]{}$>10^{10.5}$[M$_{\\odot}$]{}, which have fallen below the SFMS due to ongoing quenching, exhibit a clear central SFR suppression, in both TNG50 and 3D-HST. In TNG this inside-out quenching is due to the supermassive black hole" +"---\nabstract: 'Channel pruning is formulated as a neural architecture search (NAS) problem recently. However, existing NAS-based methods are challenged by huge computational cost and inflexibility of applications. How to deal with multiple sparsity constraints simultaneously and speed up NAS-based channel pruning are still open challenges. In this paper, we propose a novel Accurate and Automatic Channel Pruning (AACP) method to address these problems. Firstly, AACP represents the structure of a model as a structure vector and introduces a pruning step vector to control the compressing granularity of each layer. Secondly, AACP utilizes Pruned Structure Accuracy Estimator (PSAE) to speed up the performance estimation process. Thirdly, AACP proposes Improved Differential Evolution (IDE) algorithm to search the optimal structure vector effectively. Because of IDE, AACP can deal with FLOPs constraint and model size constraint simultaneously and efficiently. Our method can be easily applied to various tasks and achieve state of the art performance. On CIFAR10, our method reduces $65\\%$ FLOPs of ResNet110 with an improvement of $0.26\\%$ top-1 accuracy. On ImageNet, we reduce $42\\%$ FLOPs of ResNet50 with a small loss of $0.18\\%$ top-1 accuracy and reduce $30\\%$ FLOPs of MobileNetV2 with a small loss of $0.7\\%$ top-1 accuracy. The source" +"---\nabstract: '**Bohmian mechanics was designed to give rise to predictions identical to those derived by standard quantum mechanics, while invoking a specific interpretation of it \u2013 one which allows the classical notion of a particle to be maintained alongside a guiding wave. For this, the Bohmian model makes use of a unique *quantum potential* which governs the trajectory of the particle. In this work we show that this interpretation of quantum theory naturally leads to the derivation of interesting new phenomena. Specifically, we demonstrate how the fundamental Casimir-Polder force, by which atoms are attracted to a surface, may be temporarily suppressed by utilizing a specially designed quantum potential. We show that when harnessing the quantum potential via a suitable atomic wavepacket engineering, the absorption by the surface can be dramatically reduced. This is proven both analytically and numerically. Finally, an experimental scheme is proposed for achieving the required shape for the atomic wavepacket. All these may enable new insights into Bohmian mechanics as well as new applications to metrology and sensing.**'\nauthor:\n- 'G. Amit'\n- 'Y. Japha'\n- 'T. Shushi'\n- 'R. Folman'\n- 'E. Cohen'\ntitle: Countering a fundamental law of attraction with quantum wavepacket engineering\n---" +"---\nabstract: 'We compare the accuracy, convergence rate and computational cost of eigenerosion (EE) and phase-field (PF) methods. 
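The search step of AACP-style pruning described above, optimizing a per-layer structure vector under a FLOPs constraint, can be sketched with stock tools. SciPy's standard differential evolution stands in for the paper's IDE, and a synthetic placeholder replaces the PSAE accuracy estimator; the toy layer sizes and penalty weight are arbitrary.

```python
import numpy as np
from scipy.optimize import differential_evolution

full = np.array([64, 128, 256, 512])         # channels per layer (toy net)
flops_budget = 0.5                           # keep <= 50% of full FLOPs

def flops(frac):                             # crude proxy: sum of c_i * c_{i+1}
    c = full * frac
    return np.sum(c[:-1] * c[1:]) / np.sum(full[:-1] * full[1:])

def est_accuracy(frac):                      # placeholder for a PSAE-like estimator
    return 1.0 - np.mean((1.0 - frac) ** 2)

def objective(frac):                         # maximize accuracy under the budget
    penalty = 10.0 * max(0.0, flops(frac) - flops_budget)
    return -(est_accuracy(frac) - penalty)

res = differential_evolution(objective, bounds=[(0.1, 1.0)] * 4, seed=0)
print("kept channel fractions:", res.x.round(2),
      "FLOPs ratio:", round(flops(res.x), 2))
```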
For purposes of comparison, we specifically consider the standard test case of a center-crack panel loaded in biaxial tension and assess the convergence of the energy error as the length scale parameter and mesh size tend to zero simultaneously. The panel is discretized by means of a regular mesh consisting of standard bilinear or $\\mathbb{Q}$1 elements. The exact stresses from the known analytical linear elastic solution are applied to the boundary. All element integrals over the interior and the boundary of the domain are evaluated exactly using the symbolic computation program Mathematica. When the EE inelastic energy is enhanced by means of Richardson extrapolation, EE is found to converge at twice the rate of PF and to exhibit much better accuracy. In addition, EE affords a one-order-of-magnitude computational speed-up over PF.'\naddress:\n- ' ${}^1$Dipartimento di Ingegneria Civile e Ambientale, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy. '\n- ' ${}^2$Department of Mechanical Engineering, Universit[\u00e4]{}t Siegen, 57068 Siegen, Germany. '\n- ' ${}^3$Division of Engineering and Applied Science, California Institute of Technology, 1200 E.\u00a0California Blvd., Pasadena, CA" +"---\nabstract: 'In the present paper, several types of efficiency conditions are established for vector optimization problems with cone constraints affected by uncertainty, but with no information of stochastic nature about the uncertain data. Following a robust optimization approach, data uncertainty is faced by handling set-valued inclusion problems. The employment of recent results about error bounds and tangential approximations of the solution set to the latter enables one to achieve necessary conditions for weak efficiency via a penalization method as well as via the modern revisitation of the Euler-Lagrange method, with or without generalized convexity assumptions. The presented conditions are formulated in terms of various nonsmooth analysis constructions, expressing first-order approximations of mappings and sets, while the metric increase property plays the role of a constraint qualification.'\nauthor:\n- \ntitle: 'On some efficiency conditions for vector optimization problems with uncertain cone constraints: a robust approach'\n---\n\nVector optimization problem; data uncertainty; robust approach; weak efficiency condition; generalized derivative; generalized convexity.\n\nIntroduction\n============\n\nConsider a vector optimization problem $${{\\rm Min}_K}f(x) \\ \\hbox{\\ subject to $x\\in{{\\mathcal R}}$},\n \\leqno {({\\mathcal P})}$$ where ${{\\mathcal R}}\\subseteq{\\mathbb X}$ is a decision set defining the feasible region of the problem, $f:{\\mathbb X}\\longrightarrow{\\mathbb Y}$ represents the criterion with" +"---\nabstract: 'The increasing complexity of algorithms for analyzing medical data, including de-identification tasks, raises the possibility that complex algorithms are learning not just the general representation of the problem, but specifics of given individuals within the data. Modern legal frameworks specifically prohibit the intentional or accidental distribution of patient data, but have not addressed this potential avenue for leakage of such protected health information. Modern deep learning algorithms have the highest potential of such leakage due to complexity of the models. Recent research in the field has highlighted such issues in non-medical data, but all analysis is likely to be data and algorithm specific. 
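The Richardson extrapolation step credited above with doubling the convergence rate is easy to demonstrate in isolation. The worked example below assumes a quantity with a leading first-order error in the discretization parameter $h$ (an assumption made for illustration; the eigenerosion setting differs in detail) and applies the generic formula $f_R = (2^p f(h/2) - f(h)) / (2^p - 1)$ with $p = 1$:

```python
# Quantity with a leading O(h) discretization error (exact value = 1).
f = lambda h: 1.0 + 0.7 * h + 0.2 * h ** 2

for h in (0.2, 0.1, 0.05):
    fh, fh2 = f(h), f(h / 2)
    f_rich = 2 * fh2 - fh          # (2^p f(h/2) - f(h)) / (2^p - 1), p = 1
    print(f"h={h:<5} plain error {abs(fh2 - 1):.4f}  "
          f"extrapolated error {abs(f_rich - 1):.5f}")
# errors drop from O(h) to O(h^2): the doubled convergence rate noted above
```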
We, therefore, chose to analyze a state-of-the-art free-text de-identification algorithm based on LSTM (Long Short-Term Memory) and its potential in encoding any individual in the training set. Using the i2b2 Challenge Data, we trained, then analyzed the model to assess whether the output of the LSTM, before the compression layer of the classifier, could be used to estimate the membership of the training data. Furthermore, we used different attacks including the membership inference attack method to attack the model. Results indicate that the attacks could not identify whether members of the training data were distinguishable from" +"---\nauthor:\n- Mika Saajasto\n- Mika Juvela\n- Charl\u00e8ne Lef\u00e8vre\n- Laurent Pagani\n- Nathalie Ysard\nbibliography:\n- 'bibli.bib'\ndate: 'Received day month year / Accepted day month year'\ntitle: 'Multi-wavelength observations and modelling of a quiescent cloud LDN1512'\n---\n\nIntroduction\n============\n\nUnderstanding how stars are formed is one of the crucial questions in astronomy. The Herschel space observatory has provided us with detailed observations of nearby molecular clouds and shown that star forming regions have vastly diverse morphologies, from dynamically active filamentary fields to more quiescent clouds with simple geometries [@Molinari2010; @Menshchikov2010; @Juvela2012]. These far infrared (FIR) observations can be used to derive column density estimates and to study possible variations in dust properties. However, these studies are limited by our understanding of the emission properties of the grains, in particular the dust opacity and to a lesser degree the dust opacity spectral index.\n\nThe light scattered by dust grains at near-infrared (NIR) and mid-infrared (MIR) wavelengths can be studied and analysed without explicit assumptions of the FIR thermal emission properties of the grains, thus the scattering observations can be used to place additional constraints on dust properties and the density distribution. @Lehtinen1996 were the first to study" +"---\nabstract: 'Power consumption is one of the major issues in massive MIMO (multiple input multiple output) systems, causing increased long-term operational cost and overheating issues. In this paper, we consider per-antenna power allocation with a given finite set of power levels towards maximizing the long-term energy efficiency of the multi-user systems, while satisfying the QoS (quality of service) constraints at the end users in terms of required SINRs (signal-to-interference-plus-noise ratio), which depends on channel information. Assuming channel states to vary as a Markov process, the constrained problem is modeled as an unconstrained problem, followed by the power allocation based on the $Q$-learning algorithm. Simulation results are presented to demonstrate the successful minimization of power consumption while achieving the SINR threshold at users.'\nauthor:\n- |\n Navneet Garg, Mathini Sellathurai$^{\\dagger}$, Tharmalingam Ratnarajah\\\n The University of Edinburgh, UK, $^{\\dagger}$Heriot-Watt University, Edinburgh, UK.\nbibliography:\n- 'pa1.bib'\ntitle: 'Reinforcement Learning based Per-antenna Discrete Power Control for Massive MIMO Systems[^1]'\n---\n\nIntroduction\n============\n\nMassive MIMO systems are the central part of 5G and next generation wireless networks. Due to the large number of antennas in the array, the increased power consumption, i.e.
reduced energy efficiency (EE), causes increased operational cost and overheating problems which leads to" +"---\nabstract: 'Bhargava, Hanke, and Shankar have recently shown that the asymptotic average $2$-torsion subgroup size of the family of class groups of monogenized cubic fields with positive and negative discriminants is $3/2$ and $2$, respectively. In this paper, we provide strong computational evidence for these asymptotes. We then develop a pair of novel conjectures that predicts, for $p$ prime, the asymptotic average $p$-torsion subgroup size in class groups of monogenized cubic fields.'\nauthor:\n- Mikaeel Yunus\ntitle: 'Asymptotics of $p$-torsion subgroup sizes in class groups of monogenized cubic fields'\n---\n\nIntroduction\n============\n\nAsymptotics of class groups of number fields over the rationals have been studied for hundreds of years by a wide variety of mathematicians, including Gauss [@G] in the 18th century; Davenport, Heilbronn, Cohen, Lenstra, and Martinet [@DH; @CL; @CM] in the late 20th century; and Bhargava, Fouvry, and Kl\u00fcners in the 21st century [@B; @FK]. In 2014, Bhargava and Varma [@BV] showed that the asymptotic average $2$-torsion subgroup size of the family of class groups of cubic fields ordered by discriminant remains the same regardless of any local conditions imposed on these cubic fields. In 2018, Ho, Shankar, and Varma [@HSV] showed that the same average size" +"---\nabstract: 'Polyatomic molecules in strong laser fields can undergo substantial nuclear motion within tens of femtoseconds. Ion imaging methods based on dissociation or Coulomb explosion therefore have difficulty faithfully recording the geometry dependence of the field ionization that initiates the dissociation process. Here we compare the strong-field double ionization and subsequent dissociation of water (both H$_2$O and D$_2$O) in 10-fs and 40-fs 800-nm laser pulses. We find that 10-fs pulses turn off before substantial internuclear motion occurs, whereas rapid internuclear motion can take place during the double ionization process for 40-fs pulses. The short-pulse measurements are consistent with a simple tunnel ionization picture, whose predictions help interpret the motion observed in the long-pulse measurements.'\nauthor:\n- 'A. J. Howard'\n- 'C. Cheng'\n- 'R. Forbes'\n- 'G. A. McCracken'\n- 'W. H. Mills'\n- 'V. Makhija'\n- 'M. Spanner'\n- 'T. Weinacht'\n- 'P. H. Bucksbaum'\nbibliography:\n- 'main.bib'\ntitle: |\n Strong Field Ionization of Water:\\\n Nuclear Dynamics Revealed by Varying the Pulse Duration\n---\n\n\\[sec:Introduction\\]Introduction\n================================\n\nThe momentum distribution of ionic fragments following rapid stripping of valence electrons is often used to reconstruct the nuclear geometry of molecules immediately before dissociation [@vager_coulomb_1989]. This technique was originally developed with" +"---\nabstract: 'Massive black holes (BHs) in dwarf galaxies can provide strong constraints on BH seeds, however reliably detecting them is notoriously difficult. High resolution radio observations were recently used to identify accreting massive BHs in nearby dwarf galaxies, with a significant fraction found to be non-nuclear. Here we present the first results of our optical follow-up of these radio-selected active galactic nuclei (AGNs) in dwarf galaxies using integral field unit (IFU) data from Gemini-North. 
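A toy version of the $Q$-learning power control described in the abstract above: a tabular agent over discrete power levels, with a reward that trades energy efficiency against a penalty for violating the SINR constraint (the unconstrained reformulation). The two-state channel, power set, penalty weight, and i.i.d. fading stand-in are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
P = np.array([0.1, 0.5, 1.0, 2.0])        # discrete power levels (W)
G = np.array([0.3, 1.0])                  # two channel states (bad/good)
sinr_min, noise = 1.0, 0.1
Q = np.zeros((len(G), len(P)))
alpha, gamma, eps = 0.1, 0.9, 0.1

s = 0
for step in range(20000):
    a = rng.integers(len(P)) if rng.random() < eps else int(Q[s].argmax())
    sinr = G[s] * P[a] / noise
    rate = np.log2(1 + sinr)
    # reward: energy efficiency, minus a penalty if the QoS (SINR) is violated
    r = rate / P[a] - 5.0 * (sinr < sinr_min)
    s2 = rng.integers(len(G))             # i.i.d. stand-in for Markov fading
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

print("chosen power per channel state:", P[Q.argmax(axis=1)])
# learns the smallest power meeting the SINR threshold in each state
```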
We focus on the dwarf galaxy J1220+3020, which shows no clear optical AGN signatures in its nuclear SDSS spectrum covering the radio source. With our new IFU data, we confirm the presence of an active BH via the AGN coronal line \\[\\] and enhanced \\[\\] emission coincident with the radio source. Furthermore, we detect broad H$\\alpha$ emission and estimate a BH mass of $M_{\\rm BH}=10^{4.9}M_\\odot$. We compare the narrow emission line ratios to standard BPT diagnostics and shock models. Spatially-resolved BPT diagrams show some AGN signatures, particularly in \\[\\]/H$\\alpha$, but overall do not unambiguously identify the AGN. A comparison of our data to shock models clearly indicates shocked emission surrounding the AGN. The physical model most consistent with the data is an active BH with" +"---\nabstract: 'Coupling a system to a nonthermal environment can profoundly affect the phase diagram of the closed system, giving rise to a special class of dissipation-induced phase transitions. Such transitions take the system out of its ground state and stabilize a higher-energy stationary state, rendering it the sole attractor of the dissipative dynamics. In this paper, we present a unifying methodology, which we use to characterize this ubiquitous phenomenology and its implications for the open system dynamics. Specifically, we analyze the closed system's phase diagram, including symmetry-broken phases, and explore their corresponding excitations' spectra. Opening the system, the environment can overwhelm the system's symmetry-breaking tendencies and change its order parameter. As a result, isolated distinct phases of similar order become connected, and new phase-co-stability regions appear. Interestingly, the excitations differ in the newly connected regions through a change in their symplectic norm, which is robust to the introduction of dissipation. As a result, by tuning the system from one phase to the other across the dissipation-stabilized region, the open system fluctuations exhibit an exceptional-point-like scenario, where the fluctuations become overdamped, only to reappear with an opposite sign in the dynamical response function of the system. The overdamped" +"---\nabstract: 'Neutral atom arrays are promising for large-scale quantum computing, especially because it is possible to prepare large-scale qubit arrays. An unsolved issue is how to selectively excite one qubit deep in a 3D atomic array to Rydberg states. In this work, we show two methods for this purpose. The first method relies on a well-known result: in a dipole transition between two quantum states driven by two off-resonant fields of equal strength but opposite detunings $\\pm\\Delta$, the transition is characterized by two counter-rotating Rabi frequencies $\\Omega e^{\\pm i\\Delta t}$\u00a0\\[or $\\pm\\Omega e^{\\pm i\\Delta t}$ if the two fields have a $\\pi$-phase difference\\]. This pair of detuned fields leads to a time-dependent Rabi frequency $2\\Omega \\cos(\\Delta t)$\u00a0\\[or $2i\\Omega \\sin(\\Delta t)$\\], so that a full transition between the two levels is recovered. We show that when the two detuned fields are sent in different directions, one atom in a 3D optical lattice can be selectively addressed for Rydberg excitation, and when its state is restored, the state of any nontarget atoms irradiated in the light path is also restored.
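For reference, the time-dependent Rabi frequencies quoted above follow from the elementary superposition of the two counter-rotating terms,
$$\\Omega e^{i\\Delta t}+\\Omega e^{-i\\Delta t}=2\\Omega\\cos(\\Delta t),\\qquad \\Omega e^{i\\Delta t}-\\Omega e^{-i\\Delta t}=2i\\Omega\\sin(\\Delta t),$$
where the second identity corresponds to the case of the two fields carrying a $\\pi$-phase difference.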
Moreover, we find that the Rydberg excitation by this method can significantly suppress the fundamental blockade error of a Rydberg" +"---\nabstract: 'Inferring dynamics from time series is an important objective in data analysis. In particular, it is challenging to infer stochastic dynamics given incomplete data. We propose an expectation maximization (EM) algorithm that alternates between two steps: the E-step restores missing data points, while the M-step infers an underlying network model from the restored data. Using synthetic data from a kinetic Ising model, we confirm that the algorithm works both for restoring missing data points and for inferring the underlying model. At the initial iteration of the EM algorithm, the model inference shows better model-data consistency with observed data points than with missing data points. As we keep iterating, however, missing data points show better model-data consistency. We find that demanding equal consistency of observed and missing data points provides an effective stopping criterion for the iteration to prevent going beyond the most accurate model inference. [Using the EM algorithm and the stopping criterion together]{}, we infer missing data points from time-series data of real neuronal activities. [Our method reproduces collective properties of neuronal activities such as correlations and firing statistics even when 70% of data points are masked as missing points]{}.'\nauthor:\n- Sangwon Lee\n- Vipul Periwal" +"---\nabstract: 'A clear understanding of the non-convex landscape of neural networks remains a complex, incompletely solved problem. This paper studies the landscape of linear (residual) networks, a simplified version of nonlinear networks. By treating the gradient equations as polynomial equations, we use algebraic geometry tools to solve them over the complex number field; the attained solutions can be decomposed into different irreducible complex geometric objects. Then three hypotheses are proposed, involving how to calculate the loss on each irreducible geometric object, whether the losses of critical points lie in a certain range, and the relationship between the dimension of each irreducible geometric object and the strict saddle condition. Finally, numerical algebraic geometry is applied to verify the rationality of these three hypotheses, which further clarify the landscape of linear networks and the role of residual connections.'\nbibliography:\n- 'icdp2009.bib'\nnocite: '[@langley00]'\ntitle: 'The Landscape of Multi-Layer Linear Neural Network From the Perspective of Algebraic Geometry'\n---\n\nIntroduction {#sec:intro}\n============\n\nCommonly used deep neural networks with non-convex loss surfaces bring significant improvements to many practical applications[@krizhevsky2012imagenet]. The difficulty of non-convex optimization was manifest in the practical development of early neural networks [@blum1992training]. In the past few years, with the introduction of some" +"---\nabstract: 'Point clouds, being a simple and compact representation of the surface geometry of 3D objects, have gained increasing popularity with the evolution of deep learning networks for classification and segmentation tasks. Unlike for humans, teaching a machine to analyze the segments of an object is a challenging task, yet quite essential in various machine vision applications.
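A minimal illustrative sketch of the expectation-maximization scheme described above (not the authors' kinetic-Ising implementation): the E-step restores missing entries by their conditional means under a simple multivariate Gaussian model, and the M-step refits the model; all names and the Gaussian assumption are illustrative.

```python
import numpy as np

def em_gaussian_impute(X, mask, n_iter=30):
    """EM for a multivariate Gaussian with missing values.
    X: (T, N) float array; mask: boolean, True where observed.
    E-step: replace missing entries by their conditional mean given
    the observed entries; M-step: refit the mean and covariance."""
    mu = np.where(mask, X, 0.0).mean(axis=0)
    Xf = np.where(mask, X, mu)                  # crude initial fill
    for _ in range(n_iter):
        mu = Xf.mean(axis=0)                    # M-step: refit parameters
        S = np.cov(Xf, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        for t in range(X.shape[0]):             # E-step: restore missing points
            o, m = mask[t], ~mask[t]
            if m.any() and o.any():
                A = np.linalg.solve(S[np.ix_(o, o)], S[np.ix_(o, m)])
                Xf[t, m] = mu[m] + (Xf[t, o] - mu[o]) @ A
    return Xf, mu, S
```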
In this paper, we address the problem of segmentation and labelling of 3D point clouds by proposing an inception-based deep network architecture called PIG-Net that effectively characterizes the local and global geometric details of the point clouds. In PIG-Net, the local features are extracted from the transformed input points using the proposed inception layers and then aligned by a feature transform. These local features are aggregated using the global average pooling layer to obtain the global features. Finally, the concatenated local and global features are fed to the convolution layers to segment the 3D point clouds. We perform an exhaustive experimental analysis of the PIG-Net architecture on two state-of-the-art datasets, namely, ShapeNet\u00a0[@Yi:2016:ShapeNet] and PartNet\u00a0[@Mo:2019:PartNet]. We evaluate the effectiveness of our network by performing an ablation study.'\naddress:\n- 'IIIT Hyderabad, India'\n- 'KLE Technological University, Hubballi, India'\nauthor:\n- Sindhu\n- Shankar\nbibliography:" +"---\nabstract: 'Robust discovery of physics (e.g., governing equations and laws) is of great interest for many engineering fields and for explainable machine learning. A critical challenge compared with general training is that the terms and format of the governing equations are not known a priori. In addition, significant measurement noise and complex algorithm hyperparameter tuning usually reduce the robustness of existing methods. A robust data-driven method is proposed in this study for identifying the governing Partial Differential Equations (PDEs) of a given system from noisy data. The proposed method is based on the concept of Progressive Sparse Identification of PDEs (PSI-PDE or $\\psi$-PDE). Special focus is on the handling of data with huge uncertainties (e.g., 50$\\%$ noise level). Neural Network modeling and fast Fourier transform (FFT) are implemented to reduce the influence of noise in sparse regression. Following this, candidate terms from the prescribed library are progressively selected and added to the learned PDEs, which automatically promotes parsimony with respect to the number of terms in PDEs as well as their complexity. Next, the significance of each learned term is further evaluated and the coefficients of PDE terms are optimized by minimizing the L2 residuals. Results of numerical case studies indicate" +"---\nabstract: 'Following an eruptive accretion event in -MM1, flares in the various maser species, including water masers, were triggered. We report the observed relative proper motion of the highly variable water masers associated with the massive star-forming region, . High velocity H$_2$O maser proper motions were detected in 5 maser clusters, CM2-W2 (bow-shock structure), MM1-W1, MM1-W3, UCHII-W1 and UCHII-W3. The overall average of the derived relative proper motion is 85 . This mean proper motion is in agreement with the previous results from VLA multi-epoch observations. Our position and velocity variance and co-variance matrix analyses of the maser proper motions show its major axis to have a position angle of $-79.4^\\circ$, cutting through the dust cavity around MM1B and aligned in the northwest-southeast direction. We interpret this as the axis of the jet driving the CM2 shock and the maser motion. The complicated proper motions in MM1-W1 can be explained by the combined influence of the MM1 northeast-southwest bipolar outflow, CS(6-5) north-south collimated bipolar outflow, and the radio jet.
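The progressive selection loop at the heart of the $\\psi$-PDE idea described above can be sketched as a greedy variant under simplifying assumptions; this is schematic, not the authors' implementation, and all names are illustrative.

```python
import numpy as np

def progressive_sparse_fit(Theta, ut, max_terms=5):
    """Greedy sketch of progressive PDE identification.
    Theta: (M, K) matrix of candidate library terms evaluated on data;
    ut: (M,) time-derivative vector. Each round adds the candidate term
    most correlated with the residual, then refits all coefficients by
    least squares, promoting parsimony in the learned PDE."""
    selected, residual, coef = [], ut.copy(), None
    for _ in range(max_terms):
        scores = [abs(Theta[:, k] @ residual) / np.linalg.norm(Theta[:, k])
                  for k in range(Theta.shape[1])]
        k_best = int(np.argmax(scores))
        if k_best in selected:
            break                                  # no new term helps
        selected.append(k_best)
        coef, *_ = np.linalg.lstsq(Theta[:, selected], ut, rcond=None)
        residual = ut - Theta[:, selected] @ coef  # minimize L2 residual
    return selected, coef
```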
The relative proper motions of the H$_2$O masers in UCHII-W1 are likely not driven by the jets of the MM1B protostar but by MM3-UCHII. Overall, the post-accretion burst relative proper motions" +"---\nauthor:\n- 'Ming H. Xu'\n- Susanne Lunz\n- 'James M. Anderson'\n- Tuomas Savolainen\n- Nataliya Zubko\n- Harald Schuh\nbibliography:\n- 'gaia\\_crf.bib'\ndate: 'Received \\*\\*\\*; accepted \\*\\*\\*'\ntitle: 'Evidence of the $Gaia$\u2013VLBI position differences being related to radio source structure'\n---\n\n[We report the relationship between the $Gaia$\u2013VLBI position differences and the magnitudes of source structure effects in VLBI observations.]{} [Because the $Gaia$\u2013VLBI position differences are statistically significant for a considerable number of common sources, we attempt to discuss and explain these position differences based on VLBI observations and available source images at cm-wavelengths. ]{} [Based on the derived closure amplitude root-mean-square (CARMS), which quantifies the magnitudes of source structure effects in the VLBI observations used for building the third realization of the International Celestial Reference Frame, the arc lengths and normalized arc lengths of the position differences are examined in detail. The radio jet directions and the directions of the $Gaia$\u2013VLBI position differences are investigated for a small sample of sources. ]{} [ Both the arc lengths and normalized arc lengths of the $Gaia$ and VLBI positions are found to increase with the CARMS values. The majority of the sources with statistically significant position differences are" +"---\nabstract: 'We present a framework for simulating realistic inverse synthetic aperture radar images of automotive targets at millimeter wave frequencies. The model incorporates radar scattering phenomenology of commonly found vehicles along with range-Doppler based clutter and receiver noise. These images provide insights into the physical dimensions of the target, the number of wheels and the trajectory undertaken by the target. The model is experimentally validated with measurement data gathered from an automotive radar. The images from the simulation database are subsequently classified using both traditional machine learning techniques and deep neural networks based on transfer learning. We show that the ISAR images offer a classification accuracy above 90% and are robust to both noise and clutter.'\nauthor:\n- 'Neeraj\u00a0Pandey,\u00a0, Shobha\u00a0Sundar\u00a0Ram,\u00a0 [^1]'\nbibliography:\n- 'main.bib'\ntitle: Classification Of Automotive Targets Using Inverse Synthetic Aperture Radar Images\n---\n\nISAR, classification, automotive radar, transfer learning, radar database\n\nIntroduction\n============\n\nWith the advent of advanced driver assistance systems (ADAS), automotive radars are becoming increasingly common on cars for improving road driving conditions. These radars are used for multiple applications such as automatic cruise control, pedestrian detection, cross-traffic alert, blind-spot detection, and parking assistance" +"---\nabstract: 'Proper scoring rules are commonly applied to quantify the accuracy of distribution forecasts. Given an observation, they assign a scalar score to each distribution forecast, with the lowest expected score attributed to the true distribution.
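For concreteness, the multivariate scores discussed in the continuation can be computed directly from ensemble forecasts; a sketch following the usual ensemble-based definitions (uniform variogram weights are an assumption):

```python
import numpy as np

def energy_score(samples, y):
    """Energy score of an ensemble forecast.
    samples: (m, d) ensemble members; y: (d,) observation."""
    t1 = np.mean(np.linalg.norm(samples - y, axis=1))
    t2 = np.mean(np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=2))
    return t1 - 0.5 * t2

def variogram_score(samples, y, p=0.5):
    """Variogram score of order p with unit weights."""
    vy = np.abs(y[:, None] - y[None, :]) ** p                     # observed variogram
    vf = np.mean(np.abs(samples[:, :, None] - samples[:, None, :]) ** p, axis=0)
    return np.sum((vy - vf) ** 2)
```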
The energy and variogram scores are two rules that have recently gained some popularity in multivariate settings because their computation does not require a forecast to have a parametric density function and so they are broadly applicable. Here we conduct a simulation study to compare the discrimination ability of the energy score and three variogram scores. Compared with other studies, our simulation design is more realistic because it is supported by a historical data set containing commodity prices, currencies and interest rates, and our data generating processes include a diverse selection of models with different marginal distributions, dependence structures, and calibration windows. This facilitates a comprehensive comparison of the performance of proper scoring rules in different settings. To compare the scores we use three metrics: the mean relative score, error rate and a generalised discrimination heuristic. Overall, we find that the variogram score with parameter $p=0.5$ outperforms the energy score and the other two variogram scores.'\nauthor:\n- 'C. Alexander$\\dag^*$, M. Coulon$\\dag $," +"---\nabstract: 'Shared practices to assess the diversity of retrieval system results are still debated in the Information Retrieval community, partly because of the challenges of determining what diversity means in specific scenarios, and of understanding how diversity is perceived by end-users. The field of Music Information Retrieval is not exempt from this issue. Even though fields such as Musicology or Sociology of Music have a long tradition in questioning the representation and the impact of diversity in cultural environments, such knowledge has not yet been embedded into the design and development of music technologies. In this paper, focusing on electronic music, we investigate the characteristics of listeners, artists, and tracks that are influential in the perception of diversity. Specifically, we center our attention on 1) understanding the relationship between perceived diversity and computational methods to measure diversity, and 2) analyzing how listeners' domain knowledge and familiarity influence such perceived diversity. To accomplish this, we design a user study wherein listeners are asked to compare pairs of lists of tracks and artists, and to select the most diverse list from each pair. We compare participants' ratings with results obtained through computational models built using audio tracks' features and artist attributes. We" +"---\nabstract: 'We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers. Our model consists of an encoder, a decoder and a classifier. The encoder learns a non-linear subspace shared between the input data modalities. The classifier and the decoder act as regularizers to ensure that the low-dimensional encoding captures predictive differences between patients and controls. We use a learnable dropout layer to extract interpretable biomarkers from the data, and our unique training strategy can easily accommodate missing data modalities across subjects. We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data. Using 10-fold cross validation, we demonstrate that our model achieves better classification accuracy than baseline methods, and that this performance generalizes to a second dataset collected at a different site.
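A minimal sketch of the encoder-decoder-classifier pattern described above, with illustrative dimensions and loss weighting (assumptions, not the authors' settings):

```python
import torch
import torch.nn as nn

class EncDecClf(nn.Module):
    """Shared low-dimensional encoding; reconstruction and diagnosis
    heads act as mutual regularizers on the latent subspace."""
    def __init__(self, d_in=512, d_latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                                     nn.Linear(128, d_latent))
        self.decoder = nn.Sequential(nn.Linear(d_latent, 128), nn.ReLU(),
                                     nn.Linear(128, d_in))
        self.classifier = nn.Linear(d_latent, 2)    # patient vs. control

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

def joint_loss(x, y, model, lam=0.5):
    # reconstruction keeps the encoding faithful; classification keeps it predictive
    x_hat, logits = model(x)
    return (nn.functional.mse_loss(x_hat, x)
            + lam * nn.functional.cross_entropy(logits, y))
```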
In an exploratory analysis, we further show that the biomarkers identified by our model are closely associated with the well-documented deficits in schizophrenia.'\nauthor:\n- Sayan Ghosal\n- Qiang Chen\n- Giulio Pergola\n- 'Aaron L. Goldman'\n- William Ulrich\n- 'Karen F. Berman'\n- Giuseppe Blasi\n- Leonardo Fazio\n- Antonio" +"---\nabstract: 'The adaptation of pretrained language models to solve supervised tasks has become a baseline in NLP, and many recent works have focused on studying how linguistic information is encoded in the pretrained sentence representations. Among other information, it has been shown that entire syntax trees are implicitly embedded in the geometry of such models. As these models are often fine-tuned, it becomes increasingly important to understand how the encoded knowledge evolves along the fine-tuning. In this paper, we analyze the evolution of the embedded syntax trees along the fine-tuning process of BERT for six different tasks, covering all levels of the linguistic structure. Experimental results show that the encoded syntactic information is forgotten (PoS tagging), reinforced (dependency and constituency parsing) or preserved (semantics-related tasks) in different ways along the fine-tuning process depending on the task.'\nauthor:\n- |\n Laura Pérez-Mayos^1^, Roberto Carlini^1^, Miguel Ballesteros^2^, Leo Wanner^3,1^\\\n ^1^ TALN Research Group, Pompeu Fabra University, Barcelona, Spain\\\n ^2^ Amazon AI\\\n ^3^ Catalan Institute for Research and Advanced Studies (ICREA), Barcelona, Spain\\\n `{laura.perezm\\midroberto.carlini\\midleo.wanner}@upf.edu`\\\n `ballemig@amazon.com`\nbibliography:\n- 'eacl2021.bib'\ntitle: 'On the Evolution of Syntactic Information Encoded by BERT's Contextualized Representations'\n---\n\nIntroduction {#sec:intro}\n============\n\nAdapting unsupervised pretrained language models (LMs) to solve" +"---\nabstract: 'Bose polarons, quasi-particles composed of mobile impurities surrounded by a cold Bose gas, can experience strong interactions mediated by the many-body environment and form bipolaron bound states. Here we present a detailed study of heavy polarons in a one-dimensional Bose gas by formulating a non-perturbative theory and complementing it with exact numerical simulations. We develop an analytic approach for weak boson-boson interactions and arbitrarily strong impurity-boson couplings. Our approach is based on a mean-field theory that accounts for deformations of the superfluid by the impurities and in this way minimizes quantum fluctuations. The mean-field equations are solved exactly in the Born-Oppenheimer (BO) approximation, leading to an analytic expression for the interaction potential of heavy polarons which is found to be in excellent agreement with quantum Monte-Carlo (QMC) results. In the strong-coupling limit the potential substantially deviates from the exponential form valid for weak coupling and has a linear shape at short distances. Taking into account the leading-order Born-Huang corrections we calculate bipolaron binding energies for impurity-boson mass ratios as low as 3 and find excellent agreement with QMC results.'\nauthor:\n- 'M.\u00a0Will'\n- 'G. E. Astrakharchik'\n- 'M.\u00a0Fleischhauer'\nbibliography:\n- 'library.bib'\ntitle: 'Polaron Interactions and Bipolarons in One-Dimensional" +"---\nabstract: 'The subleading eigenvalues and associated eigenfunctions of the Perron-Frobenius operator for 2-dimensional area-preserving maps are numerically investigated.
We closely examine the validity of the so-called Ulam method, a numerical scheme believed to provide eigenvalues and eigenfunctions of the Perron-Frobenius operator, for both linear and nonlinear maps on the torus. For the nonlinear case, the second-largest eigenvalues and the associated eigenfunctions of the Perron-Frobenius operator are investigated by calculating the Fokker-Planck operator with sufficiently small diffusivity. On the basis of numerical schemes thus established, we find that eigenfunctions for the subleading eigenvalues exhibit spatially inhomogeneous patterns, especially showing localization around the region where the unstable manifolds run sparsely. Finally, such spatial patterns of the eigenfunction are shown to be very close to the distribution of the maximal finite-time Lyapunov exponents.'\naddress:\n- '$^1$ Department of Physics, Tokyo Metropolitan University, Minami-Osawa, Hachioji, Tokyo 192-0397, Japan'\n- '$^2$ Institute for Applied Systems Analysis, Jiangsu University, 212013 Zhenjiang, China'\nauthor:\n- 'Kensuke Yoshida$^{1,2}$, Hajime Yoshino$^1$, Akira Shudo$^1$ and Domenico Lippolis$^2$'\nbibliography:\n- 'reference.bib'\ntitle: 'Eigenfunctions of the Perron-Frobenius operator and the finite-time Lyapunov exponents in uniformly hyperbolic area-preserving maps'\n---\n\nIntroduction {#sec:introduction}\n============\n\nUniformly hyperbolic systems, or, more loosely, strongly chaotic" +"---\nabstract: 'We propose an analytical framework based on stochastic geometry (SG) formulations to estimate a radar's detection performance under generalized discrete clutter conditions. We model the spatial distribution of discrete clutter scatterers as a homogeneous Poisson point process and the radar cross-section of each extended scatterer as a Weibull-distributed random variable. Using this framework, we derive a metric called the radar detection coverage probability as a function of radar parameters such as transmitted power, system noise temperature and radar bandwidth; and clutter parameters such as clutter density and mean clutter cross-section. We derive the optimum radar bandwidth for maximizing this metric under noisy and cluttered conditions. We also derive the peak transmitted power beyond which there will be no discernible improvement in radar detection performance due to clutter-limited conditions. When both transmitted power and bandwidth are fixed, we show how the detection threshold can be optimized for best performance. We experimentally validate the SG results with a hybrid of Monte Carlo and full-wave electromagnetic-solver-based simulations using finite difference time domain (FDTD) techniques.'\nauthor:\n- 'Shobha\u00a0Sundar\u00a0Ram,\u00a0, Gaurav\u00a0Singh, and Gourab\u00a0Ghatak,\u00a0 [^1]'\nbibliography:\n- 'main.bib'\ntitle: Optimization of Radar Parameters" +"---\nabstract: 'Since the inception of the Android OS, smartphone sales have been growing exponentially, and today Android enjoys a monopoly in the smartphone marketplace. The widespread adoption of Android smartphones has drawn the attention of malware designers, threatening the Android ecosystem. The current state-of-the-art Android malware detection systems are based on machine learning and deep learning models. Despite having superior performance, these models are susceptible to adversarial attacks.
Therefore, in this paper, we developed eight Android malware detection models based on machine learning and deep neural networks and investigated their robustness against adversarial attacks. For this purpose, we created new variants of malware using Reinforcement Learning that are misclassified as benign by the existing Android malware detection models. We propose two novel attack strategies, namely the single-policy attack and the multiple-policy attack, using reinforcement learning for the white-box and grey-box scenarios, respectively. Putting ourselves in the adversary's shoes, we designed adversarial attacks on the detection models with the goal of maximising the fooling rate, while making minimal modifications to the Android application and ensuring that the app's functionality and behaviour do not change. We achieved an average fooling rate of $44.21\\%$ and $53.20\\%$ across all eight detection models with" +"---\nabstract: 'Extractive summarization suffers from irrelevance, redundancy and incoherence. Existing work shows that abstractive rewriting for extractive summaries can improve conciseness and readability. These rewriting systems consider extracted summaries as the only input, which is relatively focused but can lose important background knowledge. In this paper, we investigate contextualized rewriting, which ingests the entire original document. We formalize contextualized rewriting as a seq2seq problem with group alignments, introducing group tags as a solution to model the alignments and identifying extracted summaries through content-based addressing. Results show that our approach significantly outperforms non-contextualized rewriting systems without requiring reinforcement learning, achieving strong improvements in ROUGE scores over multiple extractive summarizers.'\nauthor:\n- 'Guangsheng Bao^1,2^, Yue Zhang^1,2^[^1]\\'\nbibliography:\n- 'aaai21.bib'\ntitle: Contextualized Rewriting for Text Summarization\n---\n\nIntroduction\n============\n\nExtractive text summarization systems [@Nallapati2017; @Narayan2018; @Liu2019] work by identifying salient text segments (typically sentences) from an input document as its summary. They have been shown to outperform abstractive systems [@Rush2015; @Nallapati2016; @Chopra2016] in terms of content selection and faithfulness to the input. However," +"---\nabstract: 'Robust online estimation of oscillation frequency belongs to the classical problems of system identification and adaptive control. The given harmonic signal can be noisy and have a varying amplitude at the same time, as in the case of damped vibrations. A novel robust frequency-estimation algorithm is proposed here, motivated by the existing globally convergent frequency estimator. The advantage of the proposed estimator is that it requires only one design parameter and is robust against measurement noise and initial conditions. The proven global convergence also allows for slowly varying amplitudes, which is useful for applications with damped oscillations or additionally shaped harmonic signals. The proposed analysis is simple and relies on averaging theory for periodic signals. Our results show an exponential convergence rate, which depends, analytically, on the sought frequency, adaptation gain and oscillation amplitude.
Numerical and experimental examples demonstrate the robustness and efficiency of the proposed estimator for signals with slowly varying amplitude and noise.'\naddress: 'University of Agder, 4604-Norway'\nauthor:\n- Michael Ruderman\nbibliography:\n- 'references.bib'\ntitle: 'One-parameter robust global frequency estimator for slowly varying amplitude and noisy oscillations'\n---\n\nFrequency estimation, adaptive notch filter, robust estimator, identification algorithm\n\n\\[thm\\][Lemma]{}\n\nIntroductory note {#sec:1}\n=================\n\nA common problem associated" +"---\nabstract: 'We demonstrate theoretically and experimentally that a specifically designed microcavity driven in the optical parametric oscillation regime exhibits lighthouse-like emission, i.e., an emission focused around a single direction. Remarkably, the emission direction of this micro-lighthouse is continuously controlled by the linear polarization of the incident laser, and angular beam steering over is demonstrated. Theoretically, this unprecedented effect arises from the interplay between the nonlinear optical response of microcavity exciton-polaritons, the difference in the subcavities forming the microcavity, and the rotational invariance of the device.'\nauthor:\n- 'Samuel M.H. Luk'\n- Hadrien Vergnet\n- Ombline Lafont\n- Przemyslaw Lewandowski\n- 'Nai H. Kwong'\n- Elisabeth Galopin\n- Aristide Lemaitre\n- Philippe Roussignol\n- Jérôme Tignon\n- Stefan Schumacher\n- Rolf Binder\n- Emmanuel Baudin\nbibliography:\n- 'lighthouse.bib'\ntitle: 'All-optical beam steering using the polariton lighthouse effect'\n---\n\nIntroduction\n============\n\nLighthouses have been used for millennia to inform ships of their relative position at sea. The lighthouse design possesses two advantages (Fig.\u00a0\\[fig:Fig1\\]a): Its highly directive radiation pattern allows limiting the required power to reach remote locations, and the dynamic control of the emission allows converting spatial information into time information, and vice versa. Such a design is used in" +"---\nabstract: 'We propose a new approach for paragraph recognition in document images by spatial graph convolutional networks (GCN) applied to OCR text boxes. Two steps, namely line splitting and line clustering, are performed to extract paragraphs from the lines in OCR results. Each step uses a $\\beta$-skeleton graph constructed from bounding boxes, where the graph edges provide efficient support for graph convolution operations. With pure layout input features, the GCN model size is 3$\\sim$4 orders of magnitude smaller compared to R-CNN based models, while achieving comparable or better accuracies on PubLayNet and other datasets. Furthermore, the GCN models show good generalization from synthetic training data to real-world images, and good adaptivity for variable document styles.'\nauthor:\n- |\n Renshen Wang, Yasuhisa Fujii, Ashok Popat\\\n Google Research\\\n [{rewang, yasuhisaf, popat}@google.com]{}\nbibliography:\n- 'egpaper.bib'\nnocite:\n- '[@726791]'\n- '[@DBLP:conf/icdar/ClausnerAP19]'\ntitle: 'Post-OCR Paragraph Recognition by Graph Convolutional Networks'\n---\n\nIntroduction\n============\n\nDocument image understanding is the task of recognizing, structuring, and understanding the contents of document images, and is a key technology for digitally processing and consuming such images, which are ubiquitous and can be found in numerous applications.
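The graph-convolution step that the $\\beta$-skeleton edges support reduces to a sparse propagation rule; a generic sketch with symmetric normalization is given below (the box features and the single-layer setup are assumptions, not the paper's exact architecture):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step over OCR boxes.
    A: (n, n) adjacency from the beta-skeleton graph over bounding boxes;
    X: (n, f) layout features per box (e.g., x, y, width, height);
    W: (f, f') trainable weights."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    # symmetric normalization, linear transform, then ReLU
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)
```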
Document image understanding enables the conversion of such documents into a digital format" +"---\nabstract: 'This paper introduces the subgraph nomination inference task, in which example subgraphs of interest are used to query a network for similarly interesting subgraphs. This type of problem appears time and again in real world problems connected to, for example, user recommendation systems and structural retrieval tasks in social and biological/connectomic networks. We formally define the subgraph nomination framework with an emphasis on the notion of a user-in-the-loop in the subgraph nomination pipeline. In this setting, a user can provide additional post-nomination light supervision that can be incorporated into the retrieval task. After introducing and formalizing the retrieval task, we examine the nuanced effect that user-supervision can have on performance, both analytically and across real and simulated data examples.'\nauthor:\n- |\n Al-Fahad M.\u00a0Al-Qadhi$^1$, Carey E. Priebe$^2$,\\\n Hayden S. Helm$^3$, Vince Lyzinski$^1$\\\nbibliography:\n- 'biblio.bib'\ntitle: 'Subgraph nomination: Query by Example Subgraph Retrieval in Networks'\n---\n\nIntroduction\n============\n\nStated succinctly, the subgraph nomination problem is as follows: given a subgraph or subgraphs of interest in a network $G_1$, we seek to find heretofore unknown subgraphs of interest in $G_1$ or in a second network $G_2$. The subgraph nomination problem can be viewed as an amalgam of the" +"---\nabstract: 'This paper describes computer models of three interventions used for treating refractory pulmonary hypertension (RPH). These procedures create either an atrial septal defect, a ventricular septal defect, or, in the case of a Potts shunt, a patent ductus arteriosus. The aim in all three cases is to generate a right-to-left shunt, allowing for either pressure or volume unloading of the right side of the heart in the setting of right ventricular failure, while maintaining cardiac output. These shunts are created, however, at the expense of introducing de-oxygenated blood into the systemic circulation, thereby lowering the systemic arterial oxygen saturation. The models developed in this paper are based on compartmental descriptions of human hemodynamics and oxygen transport. An important parameter included in our models is the cross-sectional area of the surgically created defect. Numerical simulations are performed to compare different interventions and various shunt sizes and to assess their impact on hemodynamic variables and oxygen saturations. We also create a model for exercise and use it to study exercise tolerance in simulated pre-intervention and post-intervention RPH patients.'\nauthor:\n- Seong Woo Han\n- Charles Puelz\n- 'Craig G. Rusin'\n- |\n \\\n Daniel J. Penny\n- Ryan Coleman\n-" +"---\nabstract: 'Given a natural number $n\\geq3$ and two points $a$ and $b$ in the unit disk ${\\mathbb{D}}$ in the complex plane, it is known that there exists a unique elliptical disk having $a$ and $b$ as foci that can also be realized as the intersection of a collection of convex cyclic $n$-gons whose vertices fill the whole unit circle ${\\mathbb{T}}$. What is less clear is how to find a convenient formula or expression for such an elliptical disk. Our main results reveal how orthogonal polynomials on the unit circle provide a useful tool for finding such a formula for some values of $n$. 
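As context for the numerical-range idea taken up in the next sentence (a standard fact, stated here for orientation): by the elliptical range theorem, the numerical range of a $2\\times 2$ matrix $A$ with eigenvalues $\\lambda_1,\\lambda_2$ is an elliptical disk with foci at $\\lambda_1$ and $\\lambda_2$ and minor axis of length
$$\\sqrt{\\operatorname{tr}(A^*A)-|\\lambda_1|^2-|\\lambda_2|^2}.$$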
The main idea is to realize the elliptical disk as the numerical range of a matrix; the problem then reduces to finding the eigenvalues of that matrix.'\naddress:\n- 'Department of Mathematics, Baylor University, Waco TX, USA'\n- 'Department of Mathematics, Baylor University, Waco TX, USA, and Department of Mathematics, University of Almería, Almería, Spain'\n- 'Department of Mathematics, Baylor University, Waco TX, USA'\n- 'Department of Mathematics, Baylor University, Waco TX, USA'\nauthor:\n- Markus Hunziker\n- 'Andrei Martínez-Finkelshtein'\n- Taylor Poe\n- Brian Simanek\ntitle: On foci of ellipses inscribed in cyclic polygons\n---" +"---\nabstract: 'In this work, we propose the use of a fully managed machine learning service, which utilizes active learning to directly build models from unstructured data. With this tool, business users can quickly and easily build machine learning models and then directly deploy them into a production-ready hosted environment without much involvement from data scientists. Our approach leverages state-of-the-art text representations such as OpenAI's GPT2 and a fast implementation of the active learning workflow that relies on a simple construction of incremental learning using linear models, thus providing a brisk and efficient labeling experience for the users. Experiments on both publicly available and real-life insurance datasets empirically show why our choices of simple and fast classification algorithms are ideal for the task at hand.'\nauthor:\n- |\n Teja Kanchinadam, Qian You, Keith Westpfahl, James Kim, Siva Gunda, Sebastian Seith, Glenn Fung\\\n American Family Insurance, Machine Learning Research Group\\\n {tkanchin, qyou, kwestpa, jjkim, sgunda, sseith, gfung}@amfam.com\\\nbibliography:\n- 'aaai20.bib'\ntitle: A Simple yet Brisk and Efficient Active Learning Platform for Text Classification\n---\n\nIntroduction\n============\n\nTopic classification and identification have remained a fundamental problem in industry, especially when dealing with large corpora of unstructured text, which paved the way for an" +"---\nabstract: 'The characterization and control of quantum effects in the performance of thermodynamic tasks may open new avenues for small thermal machines working at the nanoscale. We study the impact of coherence in the energy basis on the operation of a small thermal machine which can act either as a heat engine or as a refrigerator. We show that input coherence may enhance the machine performance and allow it to operate in otherwise forbidden regimes. Moreover, our results also indicate that, in some cases, coherence may be detrimental, rendering optimization of particular models a crucial task for benefiting from coherence-induced enhancements.'\nauthor:\n- Kenza Hammam\n- Yassine Hassouni\n- Rosario Fazio\n- Gonzalo Manzano\nbibliography:\n- 'REFS.bib'\ntitle: Optimizing autonomous thermal machines powered by energetic coherence\n---\n\nIntroduction\n============\n\nThermodynamics was originally conceived in the XIX century to describe and eventually improve steam engines\u00a0[@Carnot; @Prigogine]. It has proven to be one of the most successful theories in physics, leading to a general understanding of the physical properties of macroscopic systems, with multidisciplinary applications. Recent decades have seen a growth of interest in the thermodynamics of microscopic systems\u00a0[@QTD1; @QTD2].
Progress in experimental techniques to manipulate quantum systems" +"---\naddress: 'Institute of Theoretical Physics, School of Physics, Dalian University of Technology, Dalian, 116024, P. R. China'\nauthor:\n- Minghui DU\n- Lixin XU\ntitle: 'How will our knowledge of short gamma-ray bursts affect the distance measurement of binary neutron stars?'\n---\n\n[[lxxu@dlut.edu.cn]{}]{}\n\nIntroduction\\[Introduction\\]\n============================\n\nEver since the first observation of a gravitational wave (GW) emitted by a binary neutron star (BNS), known as GW170817\u00a0[@PhysRevLett.119.161101], and the coincident short gamma-ray burst (SGRB) event GRB170817A\u00a0[@Goldstein:2017mmi], the method of using gravitational wave standard sirens (GWSS)\u00a0[@Schutz] together with electromagnetic (EM) counterparts\u00a0[@GBM:2017lvd] to study cosmology has become a reality. The GWs from compact binaries provide direct measurements of the sources' luminosity distances. The redshift of a GW source can be independently obtained via extra information from EM counterparts\u00a0[@Holz:2005df] or with statistical methods, such as the dark siren\u00a0[@MacLeod:2007jd; @Chen:2017rfc; @Fishbach:2018gjp]. Among the various EM counterparts, the SGRB, which is usually associated with a BNS, appears especially useful. By identifying the host galaxy of an SGRB, it is possible to determine the redshift and sky position of a GW source at the same time. Moreover, the collimation property" +"---\nabstract: |\n We consider the usual causal structure $(I^+,J^+)$ on a spacetime, and a number of alternatives based on Minguzzi's $D^+$ and Sorkin and Woolgar's $K^+$, in the case where the spacetime metric is continuous, but not necessarily smooth. We compare the different causal structures based on three key properties, namely the validity of the push-up lemma, the openness of chronological futures, and the existence of limit causal curves. Recall that if the spacetime metric is smooth, $(I^+,J^+)$ satisfies all three properties, but that in the continuous case, the push-up lemma fails. Among the proposed alternative causal structures, there is one that satisfies push-up and open futures, and one that has open futures and limit curves. Furthermore, we show that spacetimes with continuous metrics do not, in general, admit a causal structure satisfying all three properties at once.\\\n *Key words:* low regularity, causality theory, push-up lemma, causal bubbles.\\\n *MSC-classification:* 53C50 (Primary) 83C75 (Secondary)\nauthor:\n- 'Leonardo García-Heveling [^1]'\nbibliography:\n- 'k-relation.bib'\ntitle: Causality theory of spacetimes with continuous Lorentzian metrics revisited\n---\n\nIntroduction\n============\n\nThe study of spacetimes with metrics of low regularity is a topic of rising importance in Lorentzian geometry. The main motivation stems from the strong" +"---\nabstract: 'We discuss the problem of unique determination of the finite free discrete Schrödinger operator from its spectrum, also known as the Ambarzumian problem, with various boundary conditions, namely any real constant boundary condition at zero and Floquet boundary conditions of any angle.
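To fix notation (a standard convention; the paper's normalization may differ): a finite discrete Schrödinger operator is the Jacobi matrix displayed below with all off-diagonal entries $a_k\\equiv 1$, i.e. it acts as
$$(H\\varphi)_k=\\varphi_{k-1}+\\varphi_{k+1}+b_k\\varphi_k,$$
and the free operator has, in addition, zero potential, $b_k\\equiv 0$.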
Then we prove the following Ambarzumian-type mixed inverse spectral result: the diagonal entries except the first and second ones, together with a set of two consecutive eigenvalues, uniquely determine the finite free discrete Schrödinger operator.'\naddress:\n- 'Department of Mathematics, Texas A[&]{}M University, College Station, TX 77843, U.S.A.'\n- 'Department of Mathematics, Texas A[&]{}M University, College Station, TX 77843, U.S.A.'\n- 'Department of Mathematics, UC Santa Cruz, Santa Cruz, CA 95064, U.S.A.'\n- 'Department of Mathematics, Texas A[&]{}M University, College Station, TX 77843, U.S.A.'\n- 'Department of Mathematics, Texas A[&]{}M University, College Station, TX 77843, U.S.A.'\n- 'Department of Mathematics, Texas A[&]{}M University, College Station, TX 77843, U.S.A.'\nauthor:\n- Jerik Eakins\n- William Frendreiss\n- Burak Hatinoğlu\n- Lucille Lamb\n- Sithija Manage\n- Alejandra Puente\nbibliography:\n- 'main.bib'\ntitle: 'Ambarzumian-type problems for discrete Schrödinger operators\\'\n---\n\n**[Introduction]{}**\n====================\n\nThe Jacobi matrix is a tridiagonal matrix defined as $$\\begin{pmatrix}\nb_1 & a_1 & 0 & 0 & 0" +"---\nabstract: 'Graph Neural Networks (GNNs) have received considerable attention for graph-structured data learning in a wide variety of tasks. The well-designed propagation mechanism, which has been demonstrated to be effective, is the most fundamental part of GNNs. Although most GNNs basically follow a message-passing scheme, little effort has been made to discover and analyze their essential relations. In this paper, we establish a surprising connection between different propagation mechanisms and a unified optimization problem, showing that despite the proliferation of various GNNs, in fact, their proposed propagation mechanisms are the optimal solution optimizing a feature fitting function over a wide class of graph kernels with a graph regularization term. Our proposed unified optimization framework, summarizing the commonalities between several of the most representative GNNs, not only provides a macroscopic view on surveying the relations between different GNNs, but also further opens up new opportunities for flexibly designing new GNNs. With the proposed framework, we discover that existing works usually utilize naïve graph convolutional kernels for the feature fitting function, and we further develop two novel objective functions considering adjustable graph kernels showing low-pass or high-pass filtering capabilities, respectively. Moreover, we provide the convergence proofs and expressive power comparisons for the" +"---\nabstract: 'Popular social media networks provide the perfect environment to study the opinions and attitudes expressed by users. While interactions in social media such as Twitter occur in many natural languages, research on stance detection (the position or attitude expressed with respect to a specific topic) within the Natural Language Processing field has largely been done for English. Although some efforts have recently been made to develop annotated data in other languages, there is a telling lack of resources to facilitate multilingual and crosslingual research on stance detection. This is partially due to the fact that manually annotating a corpus of social media texts is a difficult, slow and costly process. Furthermore, as stance is a highly domain- and topic-specific phenomenon, the need for annotated data is especially demanding.
As a result, most of the manually labeled resources are hindered by their relatively small size and skewed class distribution. This paper presents a method to obtain multilingual datasets for stance detection in Twitter. Instead of manually annotating on a per tweet basis, we leverage user-based information to semi-automatically label large amounts of tweets. Empirical monolingual and cross-lingual experimentation and qualitative analysis show that our method helps to overcome the" +"---\nabstract: 'We prove a dimension-free $L^p(\\Omega)\\times L^q(\\Omega)\\times L^r(\\Omega)\\rightarrow L^1(\\Omega\\times (0,\\infty))$ embedding for triples of elliptic operators in divergence form with complex coefficients and subject to mixed boundary conditions on $\\Omega$, and for triples of exponents $p,q,r\\in(1,\\infty)$ mutually related by the identity $1/p+1/q+1/r=1$. Here $\\Omega$ is allowed to be an arbitrary open subset of ${\\mathbb{R}}^d$. Our assumptions involving the exponents and coefficient matrices are expressed in terms of a condition known as $p$-ellipticity. The proof utilizes the method of Bellman functions and heat flows. As a corollary, we give applications to (i) paraproducts and (ii) square functions associated with the corresponding operator semigroups, moreover, we prove (iii) inequalities of Kato\u2013Ponce type for elliptic operators with complex coefficients. All the above results are the first of their kind for elliptic divergence-form operators with complex coefficients on arbitrary open sets. Furthermore, the approach to (ii),(iii) through trilinear embeddings seems to be new.'\naddress:\n- 'Andrea Carbonaro, Universit\u00e0 degli Studi di Genova, Dipartimento di Matematica, Via Dodecaneso 35, 16146 Genova, Italy'\n- 'Oliver Dragi\u010devi\u0107, Department of Mathematics, Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI-1000 Ljubljana, Slovenia, and Institute of Mathematics, Physics and Mechanics, Jadranska 19, SI-1000 Ljubljana, Slovenia'\n-" +"---\nabstract: 'When trained as generative models, Deep Learning algorithms have shown exceptional performance on tasks involving high dimensional data such as image denoising and super-resolution. In an increasingly connected world dominated by mobile and edge devices, there is surging demand for these algorithms to run locally on embedded platforms. FPGAs, by virtue of their reprogrammability and low-power characteristics, are ideal candidates for these edge computing applications. As such, we design a spatio-temporally parallelized hardware architecture capable of accelerating a deconvolution algorithm optimized for power-efficient inference on a resource-limited FPGA. We propose this FPGA-based accelerator to be used for Deconvolutional Neural Network (DCNN) inference in low-power edge computing applications. To this end, we develop methods that systematically exploit [micro-architectural innovations, design space exploration, and statistical analysis]{}. Using a Xilinx PYNQ-Z2 FPGA, we leverage our architecture to accelerate inference for two DCNNs trained on the MNIST and CelebA datasets using the Wasserstein GAN framework. 
On these networks, our FPGA design achieves a higher throughput-to-power ratio with lower run-to-run variation when compared to the NVIDIA Jetson TX1 edge computing GPU.'\nauthor:\n- Ian Colbert\n- Jake Daly\n- 'Ken Kreutz-Delgado'\n- Srinjoy Das\nbibliography:\n- 'citations.bib'\ntitle: 'A Competitive Edge:" +"---\nabstract: 'Wireless communication has achieved great success in the past several decades. The challenge is to improve bandwidth under limited spectrum and power consumption, which has gradually become a bottleneck as the technology evolves. The intrinsic problem is that communication is modeled as message transportation from sender to receiver and pursues exact message replication in Shannon's information theory, which leads to large bandwidth and power requirements as data volumes explode. However, the goal of communication among intelligent agents, entities with intelligence including humans, is to understand the meaning or semantics underlying the data, not an exact recovery of the transmitted messages. Transmitting first and understanding afterwards wastes bandwidth. In this article, we deploy semantics to solve the spectrum and power bottleneck and propose a first-understanding-then-transmission framework with high semantic fidelity. We first give a brief introduction to semantics, covering its definition and properties, to show the insights and scope of this paper. Then the proposed communication towards semantic fidelity framework is introduced, which takes the above-mentioned properties into account to further improve efficiency. Specifically, a semantic transformation is introduced to transform the input into semantic symbols." +"---\nabstract: 'Ionic microgel particles are intriguing systems in which the properties of thermo-responsive polymeric colloids are enriched by the presence of charged groups. In order to rationalize their properties and predict the behaviour of microgel suspensions, it is necessary to develop a coarse-graining strategy that starts from the accurate modelling of single particles. Here, we provide a numerical advancement of a recently-introduced model for charged co-polymerized microgels by improving the treatment of ionic groups in the polymer network. We investigate the thermoresponsive properties of the particles, in particular their swelling behaviour and structure, finding that, when charged groups are considered to be hydrophilic at all temperatures, highly charged microgels do not achieve a fully collapsed state, in favorable comparison to experiments. In addition, we explicitly include the solvent in the description and put forward a mapping between the solvophobic potential in the absence of the solvent and the monomer-solvent interactions in its presence, which is found to work very accurately for any charge fraction of the microgel. Our work paves the way for comparing single-particle properties and swelling behaviour of ionic microgels to experiments and for tackling the study of these charged soft particles at a liquid-liquid interface.'\naddress:" +"---\nabstract: 'The drive towards improved performance of machine learning models has led to the creation of complex features representing a database of condensed matter systems. The complex features, however, do not offer an intuitive explanation of which physical attributes improve the performance. The effect of the database on the performance of the trained model is often neglected.
In this work we seek to understand in depth the effect that the choice of features and the properties of the database have on a machine learning application. In our experiments, we consider the complex phase space of carbon as a test case, for which we use a set of simple, human-understandable and cheaply computable features with the aim of predicting the total energy of the crystal structure. Our study shows that (i) the performance of the machine learning model varies depending on the set of features and the database, (ii) that this performance is not transferable to every structure in the phase space, and (iii) that it depends on how well structures are represented in the database.'\nauthor:\n- 'Franz M. Rohrhofer'\n- Santanu Saha\n- Simone Di Cataldo\n- 'Bernhard C. Geiger'\n- Wolfgang von der Linden\n- Lilia Boeri\nbibliography:\n- 'main.bib'\ntitle:" +"---\nabstract: |\n In recent years, within the dairy sector, animal diet and management practices have been receiving increased attention, in particular examining the impact of pasture-based feeding strategies on the composition and quality of milk and dairy products, in line with the increased prevalence of premium *grass-fed* dairy products appearing on market shelves. To date, there are limited testing methods available for the verification of *grass-fed* dairy, and as a consequence these products are susceptible to food fraud and adulteration. Therefore, with this in mind, enhanced statistical tools studying potential differences among milk samples coming from animals on different feeding systems are required, thus providing increased security around the authenticity of the products.\n\n Infrared spectroscopy techniques are widely used to collect data on milk samples and to predict milk-related traits and characteristics. While these data are routinely used to predict the composition of the macro components of milk, each spectrum also provides a reservoir of unharnessed information about the sample. The accumulation and subsequent interpretation of these data present a number of challenges due to their high dimensionality and the relationships amongst the spectral variables.\n\n In this work, directly motivated by a dairy application, we propose a modification of" +"---\nabstract: 'Keyword extraction is the task of identifying words (or multi-word expressions) that best describe a given document; in news portals, such keywords serve to link articles on similar topics. In this work, we develop and evaluate our methods on four novel data sets covering less-represented, morphologically-rich languages in the European news media industry (Croatian, Estonian, Latvian, and Russian). First, we evaluate two supervised neural transformer-based methods, Transformer-based Neural Tagger for Keyword Identification (TNT-KID) and Bidirectional Encoder Representations from Transformers (BERT) with an additional Bidirectional Long Short-Term Memory Conditional Random Fields (BiLSTM CRF) classification head, and compare them to a baseline Term Frequency - Inverse Document Frequency (TF-IDF) based unsupervised approach.
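The TF-IDF baseline mentioned above can be reproduced in a few lines; a sketch using scikit-learn, where the corpus variable and parameter choices are illustrative assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

def tfidf_keywords(docs, top_k=10):
    """Unsupervised TF-IDF keyword baseline: for each document, return
    the top_k terms with the highest TF-IDF weight."""
    vec = TfidfVectorizer(ngram_range=(1, 2), max_features=50000)
    X = vec.fit_transform(docs)                  # (n_docs, n_terms), sparse
    terms = np.array(vec.get_feature_names_out())
    keywords = []
    for row in X:                                # csr rows, one per document
        idx = np.argsort(row.toarray().ravel())[::-1][:top_k]
        keywords.append(terms[idx].tolist())
    return keywords
```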
Next, we show that by combining the keywords retrieved by both neural transformer-based methods and extending the final set of keywords with an unsupervised TF-IDF based technique, we can drastically improve the recall of the system, making it appropriate for use as a recommendation system in a media-house environment.'\nauthor:\n- |\n Boshko Koloski\\\n Jožef Stefan Institute\\\n Jožef Stefan IPS\\\n Jamova 39, Ljubljana\\\n `boshko.koloski@ijs.si`\\\n Senja Pollak\\\n Jožef Stefan Institute\\\n Jamova 39, Ljubljana\\\n `senja.pollak@ijs.si`\\\n Blaž Škrlj\\\n Jožef Stefan Institute\\\n Jožef Stefan IPS\\\n Jamova 39, Ljubljana\\\n `blaz.skrlj@ijs.si`\\\n Matej Martinc\\" +"---\nabstract: 'In this article, we give the most general form of the quaternion algebra depending on three parameters. We define 3-parameter generalized quaternions (3PGQs) and study various properties and applications. Firstly, we present the definition, the multiplication table and other properties of 3PGQs, such as addition and subtraction, multiplication, multiplication by a scalar, unit and inverse elements, and the conjugate and norm. We give matrix representations and Hamilton operators for 3PGQs. We obtain the polar representation, and De Moivre's and Euler's formulas, with the matrix representations for 3PGQs. Besides, we give relations among the powers of the matrices associated with 3PGQs. Finally, the Lie group and Lie algebra are studied and their matrix representations are shown. Also, the Lie multiplication and the Killing bilinear form are given.'\naddress:\n- |\n Department of Mathematics, Institute of Science and Technology\\\n Kastamonu University\\\n Kastamonu, 37150\\\n Turkey\n- |\n Department of Mathematics, Faculty of Arts and Science\\\n Kastamonu University\\\n Kastamonu, 37150\\\n Turkey\nauthor:\n- 'Tuncay Deniz Şentürk\\*'\n- Zafer Ünal\ndate: 'October 25, 2019'\ntitle: '3-Parameter Generalized Quaternions'\n---\n\n[^1]\n\nIntroduction\n============\n\nIrish mathematician Sir William Rowan Hamilton started working on the complex numbers in 1830. Hamilton wanted to generalize these numbers. Firstly, he wanted to express these numbers as" +"---\nabstract: 'Given its ability to control and manipulate wireless environments, the reconfigurable intelligent surface (RIS), also known as an intelligent reflecting surface (IRS), has emerged as a key enabling technology for sixth-generation (6G) cellular networks. In the meantime, radio propagation in vehicular environments is negatively influenced by a large set of objects that cause transmission distortion, such as high buildings. Therefore, this work is devoted to exploring the integration of RIS technology with vehicular communications, while considering the dynamic nature of such communication environments. Specifically, we provide a system model where a RoadSide Unit (RSU) leverages an RIS to provide indirect wireless transmissions to disconnected areas, known as dark zones. Dark zones are spots within RSU coverage where the communication links are blocked due to blockages. In detail, a discrete RIS is utilized to provide communication links between the RSU and the vehicles passing through out-of-service zones. Therefore, the joint problem of RSU resource scheduling and RIS passive beamforming (phase-shift matrix design) is formulated as an optimization problem with the objective of maximizing the minimum average bit rate.
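In generic form (notation assumed here for illustration, not taken from the paper), such a max-min scheduling and passive-beamforming problem reads
$$\\max_{\\boldsymbol{\\alpha},\\,\\boldsymbol{\\Theta}}\\;\\min_{k}\\;\\frac{1}{T}\\sum_{t=1}^{T}\\alpha_{k,t}\\,\\log_2\\!\\left(1+\\mathrm{SINR}_k(\\boldsymbol{\\Theta}_t)\\right)\\quad\\text{s.t. }\\;\\alpha_{k,t}\\in\\{0,1\\},\\;\\;|\\theta_{n,t}|=1,$$
where $\\alpha_{k,t}$ are scheduling indicators and $\\theta_{n,t}$ the unit-modulus RIS phase shifts collected in the phase-shift matrix $\\boldsymbol{\\Theta}_t$; the binary scheduling variables and the unit-modulus constraints are what make the problem mixed-integer and non-convex.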
The formulated problem is a mixed-integer non-convex program that is difficult to solve and does not account for the uncertain" +"---\nabstract: 'The simultaneous wireless information and power transfer (SWIPT) technique is a popular strategy to convey both information and RF energy for harvesting at receivers. In this regard, we consider a two-way relay system with multiple users and a multi-antenna relay employing the SWIPT strategy, where splitting the received signal leads to a rate-energy trade-off. In the literature, transceiver designs have been studied using computationally intensive and suboptimal convex-relaxation-based schemes. In this paper, we study the balanced precoder design using chordal distance (CD) decomposition, which incurs much lower complexity and is flexible to dynamic energy requirements. We show that, given a non-negative value of the CD, the harvested energy achieved by the proposed balanced precoder is higher than that of the perfect interference alignment (IA) precoder. The corresponding loss in sum rates is also analyzed via an upper bound. Simulation results further show that the IA schemes based on mean-squared error are better suited for SWIPT maximization than the subspace alignment-based methods.'\nauthor:\n- 'Navneet Garg, Junkai Zhang, and Tharmalingam Ratnarajah, [^1]'\nbibliography:\n- 'one.bib'\ntitle: 'Rate-Energy Balanced Precoding Design for SWIPT based Two-Way Relay Systems'\n---\n\nSimultaneous wireless information and power transfer (SWIPT); two-way relay; rate-energy" +"---\nabstract: 'Spin-wave modes in magnetic waveguides with widths down to 320 nm have been studied by electrical propagating spin-wave spectroscopy and micromagnetic simulations for both longitudinal and transverse magnetic bias fields. For longitudinal bias fields, a 1.3 GHz wide spin-wave band was observed, in agreement with analytical dispersion relations for uniform magnetization. However, transverse bias fields led to several distinct bands, corresponding to different quantized width modes, with both negative and positive slopes. Micromagnetic simulations showed that, in this geometry, the magnetization was nonuniform and tilted due to the strong shape anisotropy of the waveguides. Simulations of the quantized spin-wave modes in such nonuniformly magnetized waveguides resulted in spin-wave dispersion relations in good agreement with the experiments.'\nauthor:\n- Giacomo Talmelli\n- Daniele Narducci\n- Frederic Vanderveken\n- Marc Heyns\n- Fernanda Irrera\n- Inge Asselberghs\n- 'Iuliana P. Radu'\n- Christoph Adelmann\n- Florin Ciubotaru\ntitle: 'Electrical spin-wave spectroscopy in nanoscale waveguides with nonuniform magnetization'\n---\n\nSpin waves are collective excitations of the magnetization in ferromagnetic materials with typical frequencies in the GHz range and wavelengths in the nm to \u00b5m range. Due to their low intrinsic energies, they have recently received increasing interest for ultralow power" +"---\nbibliography:\n- 'ref.bib'\ntitle: |\n Dimensional Reduction\\\n and (Anti) de Sitter Bounds\n---\n\nauthor:\n- 'Tom Rudelius'\naddress: 'Physics Department, University of California, Berkeley CA 94720 USA'\n\nabstract: Dimensional reduction has proven to be a surprisingly powerful tool for delineating the boundary between the string landscape and the swampland. 
Bounds from the Weak Gravity Conjecture and the Repulsive Force Conjecture, for instance, are exactly preserved under dimensional reduction. Motivated by its success in these cases, we apply a similar dimensional reduction analysis to bounds on the gradient of the scalar field potential $V$ and the mass scale $m$ of a tower of light particles in terms of the cosmological constant $\Lambda$, which ideally may pin down ambiguous $O(1)$ constants appearing in the de Sitter Conjecture and the (Anti) de Sitter Distance Conjecture, respectively. We find that this analysis distinguishes the bounds $|\nabla V|/V \geq \sqrt{4/(d-2)}$, $m \lesssim |\Lambda|^{1/2}$, and $m \lesssim |\Lambda|^{1/d}$ in $d$-dimensional Planck units. The first of these bounds precludes accelerated expansion of the universe in Einstein-dilaton gravity and is almost certainly violated in our universe, though it may apply in asymptotic limits of scalar field space. The second bound cannot be" +"---\nabstract: 'This paper presents a novel method for obtaining the probability of breaking ($P_b$) of deep-water, dominant wind-sea waves (that is, waves made of the energy within $\pm$30% of the peak wave frequency), derived from Gaussian wave field theory. For a given input wave spectrum, we demonstrate how it is possible to derive a joint probability density function between the wave phase speed ($c$) and the horizontal orbital velocity at the wave crest ($u$), from which a model for $P_b$ can be obtained. A non-linear kinematic wave breaking criterion consistent with the Gaussian framework is further proposed. Our model would therefore allow for the application of the classical wave breaking criterion (that is, wave breaking occurs if $u/c > 1$) in spectral wave models, which, to the authors\u2019 knowledge, has not been done to date. Our results show that the proposed theoretical model has errors of the same order of magnitude as six other historical models when assessed using three field datasets. With optimization of the proposed model\u2019s single free parameter, it can become the best performing model for specific datasets. Although our results are promising, additional, more complete wave breaking datasets collected in the field are needed to comprehensively assess" +"---\nabstract: 'Bluetooth has become critical as many IoT devices are arriving in the market. Most of the current literature focusing on Bluetooth simulation concentrates on the network protocols\u2019 performance and completely neglects the privacy protection recommendations introduced in the BLE standard. Indeed, privacy protection is one of the main issues handled in the Bluetooth standard. For instance, the current standard forces devices to change the identifier they embed within public and private packets, a mechanism known as MAC address randomization. Although randomizing MAC addresses is intended to preserve device privacy, recent literature shows that many challenges remain. One of them is the correlation between the public packets and their emitters. Unfortunately, existing evaluation tools such as NS-3 are not designed to reproduce this essential functionality of the Bluetooth standard. This makes it impossible to test solutions for different device-fingerprinting strategies, as there is a lack of *ground truth* for large-scale scenarios in which the majority of current BLE devices implement MAC address randomization. 
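For background on the randomization mechanism discussed above, a deliberately simplified sketch (not the NS-3 implementation introduced below; the full standard's resolvable private addresses also carry an AES-128 hash keyed by the device's IRK, omitted here):

```python
# Simplified sketch of BLE-style address randomization: a non-resolvable
# private address is 48 random bits with the two most significant bits
# forced to 0b00. (Resolvable private addresses additionally embed an
# AES-128-based hash; that part is an omission, not shown here.)
import secrets

def non_resolvable_private_address() -> str:
    addr = bytearray(secrets.token_bytes(6))   # 48 random bits
    addr[0] &= 0b00111111                      # clear the two most significant bits
    return ":".join(f"{b:02x}" for b in addr)

# a simulated device would re-draw such an address at each rotation interval
print(non_resolvable_private_address())
```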
In this paper, we first introduce a standard-compliant MAC address randomization solution in the NS-3 framework, capable of emulating any real BLE device in the simulation and generating realistic Bluetooth traces. In addition, since the simulation run-time" +"---\nauthor:\n- 'Subramanya Hegde[^1],'\n- 'Dileep P. Jatkar'\nbibliography:\n- 'mms-defects.bib'\ntitle: Defect Partition Function from TDLs in Commutant Pairs\n---\n\nabstract: 'We study topological defect lines in two-character rational conformal field theories. Among them, one set of two-character theories consists of commutant pairs in the $E_{8,1}$ conformal field theory. Using these defect lines, we construct the defect partition function in the $E_8$ theory. We find that the defects preserve only a part of the $E_8$ current algebra symmetry. We also determine the defect partition function in $c=24$ CFTs using these defect lines of two-character theories, and we find that, with an appropriate choice of commutant pairs, these defects preserve all current algebra symmetries of $c=24$ CFTs.'\n\nIntroduction and summary {#sec:intro}\n========================\n\nTwo-dimensional conformal field theories (2DCFTs) have played a pivotal role in understanding a variety of problems in theoretical physics, ranging from string theory [@DiFrancesco:1997nk] to mesoscopic physics [@Oshikawa:1996dj] to quantum information [@pachos2012]. All these applications, in turn, have helped deepen our understanding of 2DCFTs. However, the classification and study of 2DCFTs is an interesting problem in its own right [@DiFrancesco:1997nk]. A programme of classifying 2D rational" +"---\nabstract: |\n The maturing of blockchain technology leads to heterogeneity, where multiple solutions specialize in a particular use case. While the development of different blockchain networks shows great potential for blockchains, the isolated networks have led to data and asset silos, limiting the applications of this technology. Blockchain interoperability solutions are essential to enable distributed ledgers to reach their full potential. Such solutions allow blockchains to support asset and data transfer, resulting in the development of innovative applications.\n\n This paper proposes a novel blockchain interoperability solution for permissioned blockchains based on the publish/subscribe architecture. We implemented a prototype of this platform to show the feasibility of our design. We evaluate our solution by implementing examples of the different publisher and subscriber networks, such as Hyperledger Besu, which is an Ethereum client, and two different versions of Hyperledger Fabric. We present a performance analysis of the whole network that indicates its limits and bottlenecks. Finally, we discuss the extensibility and scalability of the platform in different scenarios. Our evaluation shows that our system can handle a throughput on the order of hundreds of transactions per second.\nauthor:\n- \nbibliography:\n- 'bibliography.bib'\ntitle: 'A Pub-Sub Architecture to Promote Blockchain Interoperability" +"---\nabstract: 'We designed a superposition calculus for a clausal fragment of extensional polymorphic higher-order logic that includes anonymous functions but excludes Booleans. The inference rules work on $\beta\eta$-equivalence classes of $\lambda$-terms and rely on higher-order unification to achieve refutational completeness. 
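For contrast with the higher-order unification just mentioned (which is undecidable in general), a minimal sketch of classical *first-order* syntactic unification; this is background for intuition only, not the calculus's own procedure:

```python
# Robinson-style first-order syntactic unification. Variables are plain
# strings; an application f(t1, ..., tn) is the tuple ("f", t1, ..., tn).
def walk(u, subst):
    while isinstance(u, str) and u in subst:
        u = subst[u]
    return u

def occurs(v, u, subst):
    u = walk(u, subst)
    if u == v:
        return True
    return not isinstance(u, str) and any(occurs(v, a, subst) for a in u[1:])

def unify(s, t, subst=None):
    subst = {} if subst is None else subst
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str):                       # s is an unbound variable
        return None if occurs(s, t, subst) else {**subst, s: t}
    if isinstance(t, str):
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):         # head-symbol or arity clash
        return None
    for a, b in zip(s[1:], t[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# unify f(X, g(Y)) with f(a, g(X))  ->  {'X': ('a',), 'Y': ('a',)}
print(unify(("f", "X", ("g", "Y")), ("f", ("a",), ("g", "X"))))
```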
We implemented the calculus in the Zipperposition prover and evaluated it on TPTP and Isabelle benchmarks. The results suggest that superposition is a suitable basis for higher-order reasoning.'\nauthor:\n- Alexander Bentkamp\n- Jasmin Blanchette\n- Sophie\u00a0Tourret\n- Petar\u00a0Vukmirovi\u0107\n- Uwe\u00a0Waldmann\nbibliography:\n- 'ms.bib'\ndate: 'Received: date / Accepted: date'\ntitle: Superposition with Lambdas\n---\n\nIntroduction {#sec:introduction}\n============\n\nSuperposition [@bachmair-ganzinger-1994] is widely regarded as the calculus par excellence for reasoning about first-order logic with equality. To increase automation in proof assistants and other verification tools based on higher-order formalisms, we propose to generalize superposition to an extensional, polymorphic, clausal version of higher-order logic (also called simple type theory). Our ambition is to achieve a *graceful* extension, which coincides with standard superposition on first-order problems and smoothly scales up to arbitrary higher-order problems.\n\nBentkamp, Blanchette, Cruanes, and Waldmann\u00a0[@bentkamp-et-al-2018] designed a family of superposition-like calculi for a $\\lambda$-free clausal fragment of higher-order logic, with currying and applied" +"---\nabstract: |\n We study the expression rates of deep neural networks (DNNs for short) for option prices written on baskets of $d$ risky assets, whose log-returns are modelled by a multivariate L\u00e9vy process with general correlation structure of jumps. We establish sufficient conditions on the characteristic triplet of the L\u00e9vy process $X$ that ensure $\\varepsilon$ error of DNN expressed option prices with DNNs of size that grows polynomially with respect to ${{\\mathcal O}}(\\varepsilon^{-1})$, and with constants implied in ${{\\mathcal O}}(\\cdot)$ which grow polynomially in $d$, thereby overcoming the curse of dimensionality (CoD) and justifying the use of DNNs in financial modelling of large baskets in markets with jumps.\n\n In addition, we exploit parabolic smoothing of Kolmogorov partial integrodifferential equations for certain multivariate L\u00e9vy processes to present alternative architectures of ReLU DNNs that provide $\\varepsilon$ expression error in DNN size ${{\\mathcal O}}(|\\log(\\varepsilon)|^a)$ with exponent $a \\sim d$, however, with constants implied in ${{\\mathcal O}}(\\cdot)$ growing exponentially with respect to $d$. Under stronger, dimension-uniform non-degeneracy conditions on the L\u00e9vy symbol, we obtain algebraic expression rates of option prices in exponential L\u00e9vy models which are free from the curse of dimensionality. In this case the ReLU DNN expression rates of prices depend" +"---\nauthor:\n- 'M. Zitelli'\n- 'M. Ferraro'\n- 'F. Mangini'\n- 'S. Wabnitz'\nbibliography:\n- 'biblio.bib'\ntitle: Singlemode spatiotemporal soliton attractor in multimode GRIN fibers\n---\n\nIntroduction\n============\n\nOptical solitons in fibers have been extensively and successfully studied over the past fifty years, leading to significant progress in long-distance optical communications and mode-locked lasers [@Hasegawabook; @ZaWabook]. 
Although nearly all of these investigations involved the generation and propagation of singlemode fiber solitons, optical solitons can be supported by multimode optical fibers (MMFs) as well [@Hasegawa:80; @Crosignani:81; @Crosignani:82; @doi:10.1063/1.5119434].\n\nInterest in MMFs has been motivated by their potential for increasing the transmission capacity of long-distance optical links via the technique of mode-division multiplexing (MDM), exploiting the multiple transverse modes of the fiber as information carriers [@Richardson]. In this context, it has been predicted that, in the presence of random mode coupling and nonlinearity, MMFs can support the stable propagation of Manakov solitons, leading to a nonlinear compensation of modal dispersion [@Mecozzi:12; @2012arXiv1207.6506M]. The possibility of overcoming modal dispersion is also of great interest for high-speed local-area networks, where MMFs are extensively employed [@Agrawalbook]. In addition, there is significant industrial interest in the use of large-area fibers for up-scaling the power of" +"---\nabstract: 'We consider a man-in-the-middle attack on the two-way quantum key distribution ping-pong and LM05 protocols, in which an eavesdropper copies all messages in the message mode while remaining undetectable in that mode. Under the attack there is therefore no disturbance in the message mode, the mutual information between the sender and the receiver is always constant and equal to one, and the messages copied by the eavesdropper are always genuine. An attack can only be detected in the control mode, but the level of detection at which the protocol should be aborted is not defined. We examine the steps of the protocol to evaluate its security and find that the protocol should be redesigned. We also compare it with the security of a one-way asymmetric BB84-like protocol in which one basis serves as the message mode and the other as the control mode, but which does define the level of detection at which the protocol should be aborted.'\nauthor:\n- 'Mladen Pavi[\u010d]{}i[\u0107]{}'\ntitle: 'How Secure are Two-Way Ping-Pong and LM05 QKD Protocols under a Man-in-the-Middle Attack?'\n---\n\nIntroduction {#intro}\n============\n\nQuantum cryptography, in particular quantum key distribution (QKD) protocols, offers us, in contrast to the classical" +"---\nabstract: 'Pareto solutions represent optimal frontiers for jointly optimizing multiple competing objective functions over the feasible set of solutions satisfying imposed constraints. Extracting a Pareto front is computationally challenging today, with limited scalability and solution accuracy. Popular generic scalarization approaches do not always converge to a global optimum and can only return one solution point per run. Consequently, multiple runs of a scalarization problem are required to guarantee a Pareto front, where all instances must converge to their respective global optima. We propose a robust, low-cost hybrid Pareto neural-filter (HNPF) optimization approach that is accurate and scales (in compute space and time) with the data dimensions and the number of functions and constraints. A first-stage neural network efficiently extracts a [*weak*]{} Pareto front, using Fritz-John conditions as the discriminator, with no assumptions of convexity on the objectives or constraints. A second-stage, low-cost Pareto filter then extracts the [*strong*]{} Pareto optimal subset from the [*weak*]{} front. 
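A minimal sketch of the second-stage idea just described (assuming all objectives are minimized; the paper's actual filter and its Fritz-John machinery are not reproduced):

```python
# Keep only the non-dominated (strong Pareto) points of a finite candidate
# set via an O(n^2) pairwise dominance check.
import numpy as np

def pareto_filter(points: np.ndarray) -> np.ndarray:
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        # q dominates p if q <= p in every objective and q < p in at least one
        dominated = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        if dominated.any():
            keep[i] = False
    return points[keep]

pts = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(pareto_filter(pts))          # [3, 3] is dominated by [2, 2] and dropped
```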
Fritz-John conditions provide strong theoretical bounds on the approximation error between the true and the network-extracted [*weak*]{} Pareto front. Numerical experiments demonstrate the accuracy and efficiency of our approach.'\nauthor:\n- '**Gurpreet Singh** ^^'\n- '**Soumyajit Gupta** ^^'\n- '**Matthew Lease**'\n- '**Clint Dawson**'" +"---\nabstract: 'The matter density field at $z\sim 6$ is very challenging to probe. One of the traditional probes of the low-density IGM that works successfully at lower redshifts is the Lyman-alpha forest in quasar spectra. However, at the end of reionization, the residual neutral hydrogen usually creates saturated absorption, so much of the information about the gas density is lost. Luckily, in a quasar proximity zone, the ionizing radiation is exceptionally intense, creating a large region with non-zero transmitted flux. In this study we use synthetic spectra from simulations to investigate how to recover the density fluctuations inside quasar proximity zones. We show that, under ideal conditions, the density can be recovered accurately with a small scatter. We also discuss how systematics such as the quasar continuum fitting and reionization models affect the results. This study shows that by analyzing the absorption features inside quasar proximity zones we can potentially constrain quasar properties and the environments they reside in.'\nauthor:\n- Huanqing Chen\n- 'Nickolay Y. Gnedin'\nbibliography:\n- 'main.bib'\ntitle: 'Recovering Density Fields inside Quasar Proximity Zones at $z\sim 6$'\n---\n\nIntroduction\n============\n\nThe intergalactic medium (IGM) contains most of the baryons of the Universe" +"---\nabstract: |\n With the rise of deep learning, there has been increased interest in using neural networks for histopathology image analysis, a field that investigates the properties of biopsy or resected specimens traditionally manually examined under a microscope by pathologists. However, challenges such as limited data, costly annotation, and processing high-resolution and variable-size images make it difficult to quickly iterate over model designs.\n\n Throughout scientific history, many significant research directions have leveraged small-scale experimental setups as **petri dishes** to efficiently evaluate exploratory ideas. In this paper, we introduce a **m**inimalist **h**istopathology image analysis dataset (**MHIST**), an analogous petri dish for histopathology image analysis. MHIST is a binary classification dataset of 3,152 fixed-size images of colorectal polyps, each with a gold-standard label determined by the majority vote of seven board-certified gastrointestinal pathologists and an annotator agreement level. MHIST occupies less than 400 MB of disk space, and a ResNet-18 baseline can be trained to convergence on MHIST in just 6 minutes using 3.5 GB of memory on an NVIDIA RTX 3090. As example use cases, we use MHIST to study natural questions such as how dataset size, network depth, transfer learning, and high-disagreement examples affect model performance.\n\n By introducing MHIST, we" +"---\nabstract: 'We give an overview of recent developments in the modeling of radiowave propagation, based on machine learning algorithms. We identify the input and output specification and the architecture of the model as the main challenges associated with machine learning-driven propagation models. Relevant papers are discussed and categorized based on their approach to each of these challenges. 
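To make the input/output framing mentioned above concrete, a generic sketch (not any specific surveyed model; the "ground truth" is a synthetic log-distance law with invented constants, and scikit-learn is assumed):

```python
# Toy ML propagation model: map link features to a path-loss value in dB.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
d = rng.uniform(10, 500, size=2000)            # Tx-Rx distance in meters
f = rng.uniform(1.0, 6.0, size=2000)           # carrier frequency in GHz
# synthetic "measurements": log-distance model plus shadowing noise
pl = 32.4 + 20 * np.log10(f) + 31 * np.log10(d) + rng.normal(0, 4, size=2000)

X = np.column_stack([np.log10(d), np.log10(f)])
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:1500], pl[:1500])
print("held-out R^2:", model.score(X[1500:], pl[1500:]))
```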
Emphasis is given on presenting the prospects and open problems in this promising and rapidly evolving area.'\nauthor:\n- 'Aristeidis Seretis, Costas D. Sarris'\nbibliography:\n- 'IEEEabrv.bib'\n- 'main.bib'\ndate: December 2019\ntitle: An Overview of Machine Learning Techniques for Radiowave Propagation Modeling\n---\n\nArtificial Intelligence, Machine learning, Neural Networks, Radiowave Propagation, Propagation Losses\n\nIntroduction {#sec_1}\n============\n\nFor the intelligent planning and efficient management of any wireless communication system, channel propagation models are indispensable [@catedra]. As a growing number of wireless services with high performance demands is offered, the need for new propagation models becomes more urgent. Safety-critical, high-throughput and low-latency are just some of the required characteristics needed in current and future wireless systems.\n\nOver the years, various empirical propagation models, such as Okumura-Hata or Walfish-Bertoni among others, have been created [@molisch; @rappaport]. Empirical models are measurement-driven, formulated after fitting measurements" +"---\nabstract: 'In this paper, we study how to optimize the federated edge learning (FEEL) in UAV-enabled Internet of things (IoT) for B5G/6G networks, from a deep reinforcement learning (DRL) approach. The federated learning is an effective framework to train a shared model between decentralized edge devices or servers without exchanging raw data, which can help protect data privacy. In UAV-enabled IoT networks, latency and energy consumption are two important metrics limiting the performance of FEEL. Although most of existing works have studied how to reduce the latency and improve the energy efficiency, few works have investigated the impact of limited batteries at the devices on the FEEL. Motivated by this, we study the battery-constrained FEEL, where the UAVs can adjust their operating CPU-frequency to prolong the battery life and avoid withdrawing from federated learning training untimely. We optimize the system by jointly allocating the computational resource and wireless bandwidth in time-varying environments. To solve this optimization problem, we employ a deep deterministic policy gradient (DDPG) based strategy, where a linear combination of latency and energy consumption is used to evaluate the system cost. Simulation results are finally demonstrated to show that the proposed strategy outperforms the conventional ones. In" +"---\nabstract: 'Among the models of disordered conduction and localization, models with $N$ orbitals per site are attractive both for their mathematical tractability and for their physical realization in coupled disordered grains. However Wegner proved that there is no Anderson transition and no localized phase in the $N \\rightarrow \\infty$ limit, if the hopping constant $K$ is kept fixed. [@PhysRevB.19.783; @Khorunzhy92] Here we show that the localized phase is preserved in a different limit where $N$ is taken to infinity and the hopping $K$ is simultaneously adjusted to keep $N \\, K$ constant. We support this conclusion with two arguments. The first is numerical computations of the localization length showing that in the $N \\rightarrow \\infty$ limit the site-diagonal-disorder model possesses a localized phase if $N\\,K$ is kept constant, but does not possess that phase if $K$ is fixed. The second argument is a detailed analysis of the energy and length scales in a functional integral representation of the gauge invariant model. 
The analysis shows that in the $K$ fixed limit the functional integral\u2019s spins do not exhibit long distance fluctuations, i.e. such fluctuations are massive and therefore decay exponentially, which signals conduction. In contrast the $N\\,K$ fixed limit preserves" +"---\nabstract: |\n ACO2163 is one of the hottest (mean $kT=12-15.5$ keV) and extremely X-ray overluminous merging galaxy clusters which is located at $z=0.203$. The cluster hosts one of the largest giant radio halos which are observed in most of the merging clusters, and a candidate radio relic. Recently, three merger shock fronts were detected in this cluster, explaining its extreme temperature and complex structure. Furthermore, previous [*XMM-Newton*]{} and [*Chandra*]{} observations hinted at the presence of a shock front that is associated with the gas \u2018bullet\u2019 crossing the main cluster in the west-ward direction, and which heated the intra-cluster medium, leading to adiabatic compression of the gas behind the \u2019bullet\u2019. The goal of this paper is to report on the detection of this shock front as revealed by the temperature discontinuity in the X-ray XMM-Newton image, and the edge in the Very Large Array (VLA) radio image. We also report on the detection of a relic source in the north-eastern region of the radio halo in the KAT-7 data, confirming the presence of an extended relic in this cluster.\\\n The brightness edge in the X-rays corresponds to a shock front with a Mach number $M= 2.2\\pm0.3$, at a distance of" +"---\nauthor:\n- 'GRAVITY Collaboration[^1]: F. Eupen'\n- 'L. Labadie'\n- 'R. Grellmann'\n- 'K. Perraut'\n- 'W. Brandner'\n- 'G. Duch\u00eane'\n- 'R. K\u00f6hler'\n- 'J. Sanchez-Bermudez'\n- 'R. Garcia Lopez'\n- 'A. Caratti o Garatti'\n- 'M. Benisty'\n- 'C. Dougados'\n- 'P. Garcia'\n- 'L. Klarmann'\n- 'A. Amorim'\n- 'M. Baub\u00f6ck'\n- 'J.P. Berger'\n- 'P. Caselli'\n- 'Y. Cl\u00e9net'\n- 'V. Coud\u00e9 du Foresto'\n- 'P.T. de Zeeuw'\n- 'A. Drescher'\n- 'G. Duvert'\n- 'A. Eckart'\n- 'F. Eisenhauer'\n- 'M. Filho'\n- 'V. Ganci'\n- 'F. Gao'\n- 'E. Gendron'\n- 'R. Genzel'\n- 'S. Gillessen'\n- 'G. Heissel'\n- 'Th. Henning'\n- 'S. Hippler'\n- 'M. Horrobin'\n- 'Z. Hubert'\n- 'A. Jim\u00e9nez-Rosales'\n- 'L. Jocou'\n- 'P. Kervella'\n- 'S. Lacour'\n- 'V. Lapeyr\u00e8re'\n- 'J.B. Le Bouquin'\n- 'P. L\u00e9na'\n- 'T. Ott'\n- 'T. Paumard'\n- 'G. Perrin'\n- 'O. Pfuhl'\n- 'G. Rodr\u00edguez-Coira'\n- 'G. Rousset'\n- 'S. Scheithauer'\n- 'J. Shangguan'\n- 'T. Shimizu'\n- 'J. Stadler'\n- 'O. Straub'\n- 'C. Straubmeier'\n- 'E. Sturm'\n- 'E. van Dishoeck'\n- 'F. Vincent'\n- 'S.D. von Fellenberg'\n- 'F. Widmann'\n- 'J. Woillez'\n- 'A. Wojtczak'\nbibliography:" +"---\nabstract: 'When can a unimodular random planar graph be drawn in the Euclidean or the hyperbolic plane in a way that the distribution of the random drawing is isometry-invariant? This question was answered for one-ended unimodular graphs in [@benjamini2019invariant], using the fact that such graphs automatically have locally finite (simply connected) drawings into the plane. For the case of graphs with multiple ends the question was left open. We revisit Halin\u2019s graph theoretic characterization of graphs that have a locally finite embedding into the plane. 
Then we prove that such unimodular random graphs do have a locally finite invariant embedding into the Euclidean or the hyperbolic plane, depending on whether the graph is amenable or not.'\nauthor:\n- \u00c1d\u00e1m Tim\u00e1r and L\u00e1szl\u00f3 M\u00e1rton T\u00f3th\nbibliography:\n- 'refs.bib'\ntitle: A full characterization of invariant embeddability of unimodular planar graphs\n---\n\nIntroduction\n============\n\nConsider a random planar map embedded in the Euclidean or hyperbolic plane $M$ with an isometry-invariant distribution. Simple examples include a lattice shifted by a suitable random isometry or a Voronoi tessellation coming from some invariant point process in $M$ [@moller2012lectures; @benjamini2001percolation]. If the expected number of vertices in a unit area is finite, then one can condition" +"---\nabstract: 'Quantum computing represents a radical departure from conventional approaches to information processing, offering the potential for solving problems that can never be approached classically. While large scale quantum computer hardware is still in development, several quantum computing systems have recently become available as commercial cloud services. We compare the performance of these systems on several simple quantum circuits and algorithms, and examine component performance in the context of each system\u2019s architecture.'\nauthor:\n- 'S. Blinov'\n- 'B. Wu'\n- 'C. Monroe'\ntitle: |\n Comparison of Cloud-Based Ion Trap and\\\n Superconducting Quantum Computer Architectures\n---\n\nIntroduction\n============\n\nQuantum computing is a revolutionary form of information processing that is capable of solving some computational problems faster than conventional (classical) approaches [@MikeAndIke; @Shor:1997]. Quantum information is represented by qubits, which can exist in superpositions of 0 and 1. Multiple qubits can be prepared in entangled states that generally possess an exponential number of superposed states, providing a quantum computer its power. Quantum algorithms can be expressed in terms of circuits involving universal discrete quantum gate operations that entangle qubits, akin to wiring transistors together to perform logic operations in classical computers. Recently, gate-based quantum computers have become available as cloud computing" +"---\nabstract: 'Traditional industry recommendation systems usually use data in a single domain to train models and then serve the domain. However, a large-scale commercial platform often contains multiple domains, and its recommendation system often needs to make click-through rate (CTR) predictions for multiple domains. Generally, different domains may share some common user groups and items, and each domain may have its own unique user groups and items. Moreover, even the same user may have different behaviors in different domains. In order to leverage all the data from different domains, a single model can be trained to serve all domains. However, it is difficult for a single model to capture the characteristics of various domains and serve all domains well. On the other hand, training an individual model for each domain separately does not fully use the data from all domains. In this paper, we propose the Star Topology Adaptive Recommender (STAR) model to train a single model to serve all domains by leveraging data from all domains simultaneously, capturing the characteristics of each domain, and modeling the commonalities between different domains. 
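One plausible realization of the shared-plus-domain-specific idea just stated (an assumption for illustration, since the architectural details are only partially given here): shared weights are reused across all domains and cheaply modulated per domain.

```python
# Serving several domains with one set of shared weights combined with light
# per-domain weights via element-wise modulation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_domains = 8, 4, 3
W_shared = rng.normal(size=(d_in, d_out))                          # learned jointly
W_domain = 1.0 + 0.01 * rng.normal(size=(n_domains, d_in, d_out))  # per-domain factors

def forward(x: np.ndarray, domain: int) -> np.ndarray:
    # the shared weights capture commonalities; the per-domain factor
    # captures each domain's characteristics
    return x @ (W_shared * W_domain[domain])

x = rng.normal(size=(2, d_in))
print(forward(x, domain=1).shape)    # (2, 4)
```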
Essentially, the network of each domain consists of two factorized networks: one centered network shared by all domains" +"---\nabstract: 'Martian araneiform terrain, located in the Southern polar regions, consists of features with central pits and radial troughs which are thought to be associated with the solid state greenhouse effect under a CO$_{2}$ ice sheet. Sublimation at the base of this ice leads to gas buildup, fracturing of the ice and the flow of gas and entrained regolith out of vents and onto the surface. There are two possible pathways for the gas: through the gap between the ice slab and the underlying regolith, as proposed by @Kieffer2007, or through the pores of a permeable regolith layer, which would imply that regolith properties can control the spacing between adjacent spiders, as suggested by @Hao. We test this hypothesis quantitatively in order to place constraints on the regolith properties. Based on previously estimated flow rates and thermophysical arguments, we suggest that there is insufficient depth of porous regolith to support the full gas flow through the regolith. By contrast, free gas flow through a regolith\u2013ice gap is capable of supplying the likely flow rates for gap sizes on the order of a centimetre. This size of gap can be opened in the centre of a spider feature by gas" +"---\nabstract: 'The primary challenge in solving kinetic equations, such as the Vlasov equation, is the high-dimensional phase space. In this context, dynamical low-rank approximations have emerged as a promising way to reduce the high computational cost imposed by such problems. However, a major disadvantage of this approach is that the physical structure of the underlying problem is not preserved. In this paper, we propose a dynamical low-rank algorithm that conserves mass, momentum, and energy as well as the corresponding continuity equations. We also show how this approach can be combined with a conservative time and space discretization.'\naddress:\n- 'Department of Mathematics, University of Innsbruck, Austria'\n- 'Physics Division, Lawrence Livermore National Laboratory, California, USA'\nauthor:\n- Lukas Einkemmer\n- Ilon Joseph\nbibliography:\n- 'references.bib'\ntitle: 'A mass, momentum, and energy conservative dynamical low-rank scheme for the Vlasov equation'\n---\n\ndynamical low-rank approximation, conservative numerical methods, complexity reduction, Vlasov equation, kinetic equation\n\nIntroduction\n============\n\nSolving kinetic equations efficiently is important in applications ranging from plasma physics to radiative transfer. The main challenge in this context is the up to six-dimensional phase space and the associated unfavorable scaling of computational cost and memory requirements, usually referred to as the curse" +"---\nabstract: 'Systems aiming to aid consumers in their decision-making (e.g., by implementing persuasive techniques) are more likely to be effective when consumers trust them. However, recent research has demonstrated that the machine learning algorithms that often underlie such technology can act unfairly towards specific groups (e.g., by making more favorable predictions for men than for women). An undesired disparate impact resulting from this kind of algorithmic unfairness could diminish consumer trust and thereby undermine the purpose of the system. 
We studied this effect by conducting a between-subjects user study investigating how (gender-related) disparate impact affected consumer trust in an app designed to improve consumers\u2019 financial decision-making. Our results show that disparate impact decreased consumers\u2019 trust in the system and made them less likely to use it. Moreover, we find that trust was affected to the same degree across consumer groups (i.e., advantaged and disadvantaged users) despite both of these consumer groups recognizing their respective levels of personal benefit. Our findings highlight the importance of fairness in consumer-oriented artificial intelligence systems.'\nauthor:\n- 'Tim Draws^()^'\n- Zolt\u00e1n Szl\u00e1vik\n- Benjamin Timmermans\n- Nava Tintarev\n- 'Kush R. Varshney'\n- Michael Hind\nbibliography:\n- 'library.bib'\ntitle: Disparate Impact Diminishes Consumer Trust" +"---\nabstract: |\n In high energy particle collisions the shape of the event, i.e. the relative distribution of particles in momentum space, is often used to try to select events with certain topologies. It is claimed that an event shape observable like transverse sphericity is able to discriminate between jet-like events and events that are dominated by soft production from the underlying event.\n\n In this paper we investigate the relationship between the shape of the event and the number of jets found in the respective event for both $e^{+}e^{-}$ and pp collisions using the PYTHIA model. In $e^{+}e^{-}$ collisions, we find that the transverse sphericity of the event can be used effectively to either enhance or suppress the fraction of jets found in the selected sample, and can even discriminate between single, two, and multi-jet topologies. However, contrary to current literature, we find that in pp collisions this does not hold. It is shown that the transverse sphericity as well as the particle multiplicity is sensitive to the number of multi-parton interactions.\nauthor:\n- |\n M. Sas^1,2^, J. Schoppink^1^\\\n ^1^Institute for Subatomic Physics, Utrecht University/Nikhef, Utrecht, Netherlands\\\n ^2^Physics Department, Yale University, New Haven CT, U.S.A.\\\ntitle: 'Event shapes and jets" +"---\nabstract: 'With new applications for radar networks such as automotive control or indoor localization, the need for spectrum sharing and general interoperability is expected to rise. This paper describes the application of multi-player bandit algorithms for waveform selection to a distributed cognitive radar network that must coexist with a communications system. Specifically, we make the assumption that radar nodes in the network have no dedicated communication channel. As we will discuss later, nodes can communicate *indirectly* by taking actions which intentionally interfere with other nodes and observing the resulting collisions. The radar nodes attempt to optimize their own spectrum utilization while avoiding collisions, not only with each other, but with the communications system. The communications system is assumed to statically occupy some subset of the bands available to the radar network. First, we examine models that assume each node experiences equivalent channel conditions, and later examine a model that relaxes this assumption.'\nauthor:\n- 'William W. Howard, Charles E. Thornton, Anthony F. Martone, R. 
Michael Buehrer [^1] [^2] [^3]'\nbibliography:\n- 'bibli.bib'\ntitle: 'Multi-player Bandits for Distributed Cognitive Radar'\n---\n\nradar networks, multi-arm-bandit, cognitive radar, reinforcement learning\n\nIntroduction\n============\n\nWith the advent of fifth-generation (5G) cellular technologies, a large" +"---\nabstract: 'We present a new Monte Carlo event generator for the production of a top-quark pair in association with a $W^\\pm$ boson at hadron colliders in the framework. We consider the next-to-leading-order QCD corrections to the $pp\\to t\\tb W^\\pm$ cross section, corresponding to the $\\mathcal{O}(\\alpha_s^3\\alpha)$ and $\\mathcal{O}(\\alpha_s\\alpha^3)$ terms in the perturbative expansion of the parton-level cross section, and model the decays of $W$ and top quarks at leading order retaining spin correlations. The fixed-order QCD calculation is further interfaced with the parton-shower event generator via the method as implemented in the . The corresponding code is now part of the public repository of the . We perform a comparison of different event generators for both the case of inclusive production and the case of the two same-sign leptons signature at the Large Hadron Collider operating at a center-of-mass energy of $13~\\TeV$. We investigate theoretical uncertainties in the modelling of the fiducial volume stemming from missing higher-order corrections, the different parton shower matching schemes, and the modelling of decays. We find that the subleading contribution at $\\mathcal{O}(\\alpha_s\\alpha^3)$ is particularly sensitive to differences in the matching scheme and higher-order parton shower effects. We observe that in particular jet observables can differ" +"---\nabstract: 'Using the shared-private paradigm and adversarial training has significantly improved the performances of multi-domain text classification (MDTC) models. However, there are two issues for the existing methods. First, instances from the multiple domains are not sufficient for domain-invariant feature extraction. Second, aligning on the marginal distributions may lead to fatal mismatching. In this paper, we propose a mixup regularized adversarial network (MRAN) to address these two issues. More specifically, the domain and category mixup regularizations are introduced to enrich the intrinsic features in the shared latent space and enforce consistent predictions in-between training instances such that the learned features can be more domain-invariant and discriminative. We conduct experiments on two benchmarks: The Amazon review dataset and the FDU-MTL dataset. 
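A generic mixup sketch for the regularization described above (the paper applies domain- and category-level variants; this shows only the base operation):

```python
# Convex combinations of paired inputs and labels produce the in-between
# training instances on which consistent predictions are enforced.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng(0)):
    lam = rng.beta(alpha, alpha)                 # mixing coefficient in [0, 1]
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_mix, y_mix = mixup(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                     np.array([0.0, 1.0]), np.array([0.0, 1.0]))
print(x_mix, y_mix)
```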
Our approach on these two datasets yields average accuracies of 87.64% and 89.0%, respectively, outperforming all relevant baselines.'\naddress: |\n $^{\star}$ Carleton University, Ottawa, Canada\\\n $^{\dagger}$ University of Ottawa, Ottawa, Canada\nbibliography:\n- 'strings.bib'\ntitle: 'Mixup Regularized Adversarial Networks for Multi-Domain Text Classification'\n---\n\nMulti-domain text classification, mixup, adversarial training\n\nIntroduction\n============\n\nText classification is a fundamental task in natural language processing (NLP) and has been successfully applied in a wide variety of applications, such as spam detection [@ngai2011application]," +"---\nauthor:\n- Qian Wang\n- Yurong Chen\nbibliography:\n- 'ref.bib'\ntitle: 'The Tight Bound for Pure Price of Anarchy in an Extended Miner\u2019s Dilemma Game'\n---\n\nIntroduction\n============\n\nBitcoin, assumed to be one of the most successful applications of blockchain, has gained considerable attention since its inception in 2008 [@nakamoto2008peer]. One research direction of interest is to study its security and potential attacks. Bitcoin\u2019s security mainly comes from two sources: the blockchain data structure and the proof-of-work (PoW) consensus protocol. Blockchain is an open, transparent, decentralized digital ledger that can validate transactions between two parties without third-party authentication. Transaction records are stored as blocks linked by hash pointers. PoW is used to decide who gets the power of authorizing the next valid block. In this protocol, miners have to solve a computationally difficult puzzle, and the first miner working out the solution can announce his block and get the block reward. This puzzle is specially designed so that miners spending more mining power have higher probabilities of solving it. Due to the fierce competition in the Bitcoin system, it may take months, even years, for a single miner to find a solution, so miners tend to form mining pools to reduce" +"---\nabstract: 'Guided by the intuition of coherent superposition of causal relations, recent works presented quantum processes without a classical common-cause and direct-cause explanation, that is, processes which cannot be written as probabilistic mixtures of quantum common-cause and quantum direct-cause relations (CCDC). In this work, we analyze the minimum requirements for a quantum process to fail to admit a CCDC explanation and present \u201csimple\u201d processes, which we prove to be the most robust ones against general noise. These simple processes can be realized by preparing a maximally entangled state and applying the identity quantum channel, thus not requiring an explicit coherent mixture of common-cause and direct-cause, exploiting the possibility of a process to have both relations simultaneously. We then prove that, although all bipartite direct-cause processes are bipartite separable operators, there exist bipartite separable processes which are not direct-cause. This shows that the problem of deciding whether a process is direct-cause *is not* equivalent to entanglement certification and points out the limitations of entanglement methods for detecting non-classical CCDC processes. We also present a semi-definite programming hierarchy that can detect and quantify the non-classical CCDC robustness of every non-classical CCDC process. Among other results, our numerical methods allow us to show" +"---\nabstract: 'Imbalanced data classification remains a vital problem. 
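A sketch of the Hellinger distance that the ensemble method below uses as its pruning criterion, for two discrete distributions (implementation details here are assumptions, not the paper's code):

```python
# H(p, q) = sqrt(0.5 * sum((sqrt(p) - sqrt(q))^2)), bounded in [0, 1].
import numpy as np

def hellinger(p: np.ndarray, q: np.ndarray) -> float:
    p = p / p.sum()                      # normalize to probability vectors
    q = q / q.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

print(hellinger(np.array([0.9, 0.1]), np.array([0.9, 0.1])))  # 0.0 (identical)
print(hellinger(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 1.0 (disjoint)
```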
The key is to find methods that classify both the minority and the majority class correctly. The paper presents a classifier ensemble for classifying binary, non-stationary and imbalanced data streams, where the *Hellinger Distance* is used to prune the ensemble. The paper includes an experimental evaluation of the method based on two experiments. The first one checks the impact of the base classifier type on the quality of the classification. In the second experiment, the *Hellinger Distance Weighted Ensemble* (hdwe) method is compared to selected state-of-the-art methods using a statistical test with two base classifiers. The method was thoroughly tested on many imbalanced data streams, and the obtained results proved the hdwe method\u2019s usefulness.'\naddress: 'Wroclaw University of Science and Technology, Wybrzeze Wyspianskiego 27, 50-370 Wroclaw, Poland'\nauthor:\n- Joanna Grzyb\n- Jakub Klikowski\n- Micha\u0142 Wo\u017aniak\nbibliography:\n- 'bibliography.bib'\ntitle: Hellinger Distance Weighted Ensemble for Imbalanced Data Stream Classification\n---\n\nclassifier ensemble, data stream, Hellinger Distance, imbalanced data, pattern classification\n\nIntroduction {#sec:introduction}\n============\n\nResearchers are still working on imbalanced data stream classification. The problem arises in real applications, and there are not many solutions to" +"---\nabstract: 'Inspired by the success of WaveNet in multi-subject speech synthesis, we propose a novel neural network based on causal convolutions for multi-subject motion modeling and generation. The network can capture the intrinsic characteristics of the motion of different subjects, such as the influence of skeleton scale variation on motion style. Moreover, after fine-tuning the network using a small motion dataset for a novel skeleton that is not included in the training dataset, it is able to synthesize high-quality motions with a personalized style for the novel skeleton. The experimental results demonstrate that our network can model the intrinsic characteristics of motions well and can be applied to various motion modeling and synthesis tasks.'\nauthor:\n- |\n Shuaiying Hou, *State Key Lab of CAD&CG, Zhejiang University, China,* `11721044@zju.edu.cn`\\\n Congyi Wang, Xmov, China, `artwang007@gmail.com`\\\n Wenlin Zhuang, *Southeast University, China,* `wlzhuang@seu.edu.cn`\\\n Yu Chen, *Xmov, China,* `chenyu@xmov.ai`\\\n Hujun Bao, *State Key Lab of CAD&CG, Zhejiang University, China,* `bao@cad.zju.edu.cn`\\\n Yangang Wang, *Southeast University, China,* `yangangwang@seu.edu.cn`\\\n Jinxiang Chai, *Xmov, China,* `chaijinxiang@xmov.ai`\\\n Weiwei Xu[^1], *State Key Lab of CAD&CG, Zhejiang University, China,* `xww@cad.zju.edu.cn`\nbibliography:\n- 'refs.bib'\ntitle: 'A causal convolutional neural network for multi-subject motion modeling and generation[^2]'\n---\n\nIntroduction {#sec:introduction}\n============\n\nHuman-motion generation is" +"---\nabstract: 'The growth of single-wall carbon nanotubes (SWCNTs) inside host SWCNTs remains a compelling alternative to the conventional catalyst-induced growth processes. It not only provides a catalyst-free process but also the ability to control the constituents of the inner tube if appropriate starting molecules are used. We report herein the growth of inner SWCNTs from $^{13}$C-labeled toluene and natural-carbon C$_{60}$. The latter molecule is essentially a stopper which acts to retain the smaller toluene. 
The Raman spectrum of the inner nanotubes is anomalous, as it contains a highly isotope-shifted \u201ctail\u201d which cannot be explained by assuming a homogeneous distribution of the isotopes. [Semi-empirical]{} calculations of the Raman modes indicate that this unusual effect is explicable if small clusters of $^{13}$C are assumed. This indicates the absence of carbon diffusion during the inner-tube growth. When combined with appropriate molecular recognition, this may enable molecular engineering of the atomic and isotope composition of the inner tubes.'\nauthor:\n- 'J. Koltai'\n- 'H. Kuzmany'\n- 'T. Pichler'\n- 'F. Simon'\ntitle: 'Linearly controlled arrangement of $^{13}$C isotopes in single-wall carbon nanotubes'\n---\n\nIntroduction\n============\n\nThe growth of carbon nanotubes from carbonaceous materials, which are encapsulated" +"---\nabstract: 'Objective: Surgical activity recognition is a fundamental step in computer-assisted interventions. This paper reviews the state-of-the-art in methods for automatic recognition of fine-grained gestures in robotic surgery, focusing on recent data-driven approaches, and outlines the open questions and future research directions. Methods: An article search was performed on 5 bibliographic databases with combinations of the following search terms: robotic, robot-assisted, JIGSAWS, surgery, surgical, gesture, fine-grained, surgeme, action, trajectory, segmentation, recognition, parsing. Selected articles were classified based on the level of supervision required for training and divided into different groups representing major frameworks for time series analysis and data modelling. Results: A total of 52 articles were reviewed. The research field is showing rapid expansion, with the majority of articles published in the last 4 years. Deep-learning-based temporal models with discriminative feature extraction and multi-modal data integration have demonstrated promising results on small surgical datasets. Currently, unsupervised methods perform significantly less well than the supervised approaches. Conclusion: The development of large and diverse open-source datasets of annotated demonstrations is essential for the development and validation of robust solutions for surgical gesture recognition. While new strategies for discriminative feature extraction and knowledge transfer, or unsupervised and semi-supervised approaches, can mitigate the" +"---\nauthor:\n- 'H. Kim'\n- 'B. Epel'\n- 'S. Sundramoorthy'\n- 'H.-M. Tsai'\n- 'E. Barth'\n- 'I. Gertsenshteyn'\n- 'H. Halpern'\n- 'Y. Hua'\n- 'Q. Xie'\n- 'C.-T. Chen'\n- 'C.-M. Kao'\nbibliography:\n- 'petepr.bib'\ntitle: 'Development of a PET/EPRI combined imaging system for assessing tumor hypoxia'\n---\n\nIntroduction\n============\n\nBecause the rapid proliferation of cancer cells outgrows the blood supply, the amount of available oxygen is limited in some tumor regions distant from blood vessels. Tumor hypoxia [@hockel_2001; @walsh_2014] refers to the lower oxygen concentration in tumor tissue, typically below 10 torr, and it is a characteristic feature of human and animal solid tumors. Hypoxia is known to promote tumor progression and migration by changing tumor metabolism to adapt to the oxygen-deprived environment [@sullivan_2007; @lu_2010]. It has long been known that hypoxic tumors are more resistant to radiation therapy [@gray_1953; @brizel_1999; @moeller_2007] and require additional dose delivery for effective treatment. 
Therefore, precise delineation of the hypoxic tumor region is essential for targeting only the hypoxic sub-volume within the tumor in radiation therapy to improve the treatment outcome [@rajendran_2006; @thorwarth_2007; @lee_2014; @epel_2019].\n\nIn the clinic, PET imaging with radio-tracers such as $^{18}$F-fluoromisonidazole (F-MISO) [@rasey_1987; @rajendran_2015] has been used" +"---\nabstract: 'Elliptical galaxies have dynamically hot ($\sigmaOne \sim 100 \endash 300\ \kms$) populations of stars, and presumably, smaller objects like comets. Because interstellar minor bodies are moving much faster, they hit planets harder and more often than in the local Galaxy. I estimate the rates for Chicxulub-scale impacts on an Earth-size planet in elliptical galaxies as a potential habitability constraint on intelligent life. Around most stars in a normal elliptical galaxy, these planets receive only $\sim 0.01 \endash 0.1\ \Gyr^{-1}$, although hazardous rates may be common in certain compact early-type galaxies and red nuggets. About $\sim 5\%$ of the stellar mass is in a region where the rate is $>10\ \Gyr^{-1}$, large enough to dominate the mass extinction rate. This suggests that elliptical galaxies have an exclusion zone parsecs in radius around their centers for the evolution of intelligent life.'\nauthor:\n- 'Brian C. Lacki'\nbibliography:\n- 'EllipticalComets\_vProof\_arXiv.bib'\ntitle: 'Life in Elliptical Galaxies: Hot Spheroids, Fast Stars, Deadly Comets?'\n---\n\nIntroduction {#sec:Intro}\n============\n\nGalactic habitability is the notion that galactic-scale environmental factors affect the abundance of life-friendly planets. Thus far, the main identified factors are (1) stellar population metallicity, which can limit the frequency of planets [@Gonzalez01; @Lineweaver04]; (2)" +"---\nabstract: 'Solving complex problems using reinforcement learning necessitates breaking down the problem into manageable tasks and learning policies to solve these tasks. These policies, in turn, have to be controlled by a master policy that takes high-level decisions. Hence learning policies involves hierarchical decision structures. However, training such methods in practice may lead to poor generalization, with either sub-policies executing actions for too few time steps or devolving into a single policy altogether. In our work, we introduce an alternative approach to learn such skills *sequentially* without using an overarching hierarchical policy. We propose this method in the context of environments where a major component of the objective of a learning agent is to prolong the episode for as long as possible. We refer to our proposed method as *Sequential Soft Option Critic*. We demonstrate the utility of our approach on navigation and goal-based tasks in a flexible simulated 3D navigation environment that we have developed. We also show that our method outperforms prior methods such as Soft Actor-Critic and Soft Option Critic on various environments, including the Atari River Raid environment and the Gym-Duckietown self-driving car simulator.'\nauthor:\n- |\n Ambedkar Dukkipati and Rajarshi Banerjee and\\\n Ranga Shaarad" +"---\nabstract: 'Marangoni instabilities can emerge when a liquid interface is subjected to a concentration or temperature gradient. It is generally believed that for these instabilities bulk effects like buoyancy are negligible compared to interfacial forces, especially on small scales. 
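A standard scaling argument (background only, not the paper's derivation) makes the "negligible on small scales" claim quantitative: the Bond number compares buoyancy to capillarity for a drop of radius $R$ and vanishes as $R \to 0$.

```latex
\[
  \mathrm{Bo} \;=\; \frac{\Delta\rho\, g\, R^{2}}{\sigma}
  \;\xrightarrow[\;R \to 0\;]{}\; 0 ,
\]
% so bulk (buoyant) effects drop out relative to interfacial
% (Marangoni/capillary) forces for sufficiently small drops.
```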
Consequently, the effect of a stable stratification on the Marangoni instability has hitherto been ignored. Here we report, for an immiscible drop immersed in a stably stratified ethanol-water mixture, a new type of oscillatory solutal Marangoni instability which is triggered once the stratification has reached a critical value. We experimentally explore the parameter space spanned by the stratification strength and the drop size and theoretically explain the observed crossover from levitating to bouncing by balancing the advection and diffusion around the drop. Finally, the effect of the stable stratification on the Marangoni instability is surprisingly amplified in confined geometries, leading to an earlier onset.'\nauthor:\n- Yanshen Li\n- Christian Diddens\n- Andrea Prosperetti\n- Detlef Lohse\ntitle: Marangoni instability of a drop in a stably stratified liquid\n---\n\nA concentration or temperature gradient applied to an interface can induce a Marangoni instability of the motionless state, resulting in a steady convection. Similarly, the steady state Marangoni convection can" +"---\nabstract: |\n In this paper, we consider symmetric $\\alpha$-stable processes on (unbounded) horn-shaped regions which are non-uniformly $C^{1,1}$ near infinity. By using probabilistic approaches extensively, we establish two-sided Dirichlet heat estimates of such processes for all time. The estimates are very sensitive with respect to the reference function corresponding to each horn-shaped region. Our results also cover the case that the associated Dirichlet semigroup is not intrinsically ultracontractive. A striking observation from our estimates is that, even when the associated Dirichlet semigroup is intrinsically ultracontractive, the so-called Varopoulos-type estimates do not hold for symmetric stable processes on horn-shaped regions.\n\n **Keywords:** Dirichlet heat kernel; fractional Laplacian; horn-shaped region; L\u00e9vy system\n\n **MSC 2020:** 60G51; 60G52; 60J25; 60J76.\nauthor:\n- Xin ChenPanki Kim Jian Wang\ntitle: '**Two-sided Dirichlet heat estimates of symmetric stable processes on horn-shaped regions**'\n---\n\n[^1] [^2] [^3]\n\nBackground and main results {#section1}\n===========================\n\nDirichlet heat kernel is the fundamental solution of the heat equation with zero exterior conditions, which plays an important role in the study of Cauchy or Poisson problems with Dirichlet conditions. While the research on estimates and properties for the Dirichlet heat kernel of the Laplacian has a long history and fruitful results (see [@GS]" +"---\nabstract: 'We design and analyze an algorithm for first-order stochastic optimization of a large class of functions on ${{\\mathbb{R}}}^d$. In particular, we consider the *variationally coherent* functions which can be convex or non-convex. The iterates of our algorithm on variationally coherent functions converge almost surely to the global minimizer ${\\boldsymbol{x}}^*$. Additionally, the very same algorithm with the same hyperparameters, after $T$ iterations guarantees on convex functions that the expected suboptimality gap is bounded by $\\widetilde{O}({\\left\\|{{\\boldsymbol{x}}^* - {\\boldsymbol{x}}_0}\\right\\|} T^{-1/2+\\epsilon})$ for any $\\epsilon>0$. It is the first algorithm to achieve both these properties at the same time. Also, the rate for convex functions essentially matches the performance of parameter-free algorithms. 
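For intuition on the algorithm family described next, a generic FTRL-with-rescaled-gradients sketch (the paper's time-varying linearithmic regularizer and its guarantees are not reproduced; a quadratic regularizer is used for readability):

```python
import numpy as np

def ftrl(grad, x0, T=500, reg=1.0):
    x, g_sum = x0.copy(), np.zeros_like(x0)
    for t in range(1, T + 1):
        g = grad(x)
        g_sum += g / max(np.linalg.norm(g), 1e-12)    # unit-norm (rescaled) gradient
        # closed-form argmin of <g_sum, x> + reg*sqrt(t)/2 * ||x - x0||^2
        x = x0 - g_sum / (reg * np.sqrt(t))
    return x

grad = lambda x: 2.0 * (x - 3.0)                       # gradient of (x - 3)^2
print(ftrl(grad, np.array([0.0])))                     # approaches [3.]
```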
Our algorithm is an instance of the Follow The Regularized Leader algorithm with the added twist of using *rescaled gradients* and time-varying linearithmic regularizers.'\nauthor:\n- |\n Francesco Orabona\\\n Boston University, Boston, MA\\\n [francesco@orabona.edu](francesco@orabona.edu)\n- |\n D\u00e1vid P\u00e1l\\\n New York, NY\\\n [davidko.pal@gmail.com](davidko.pal@gmail.com)\\\nbibliography:\n- 'biblio.bib'\ntitle: 'Parameter-free Stochastic Optimization of Variationally Coherent Functions'\n---\n\nIntroduction\n============\n\nWe consider the problem of finding the minimizer of a differentiable function $F:{{\\mathbb{R}}}^d \\to {{\\mathbb{R}}}$ using access only to noisy gradients of the function. This is a fundamental problem in stochastic optimization and machine learning." +"---\nabstract: 'Here we aim to explore the origin of the strong lines to reimagine the chemistry of protoplanetary disks. There are a few key aspects that drive our analysis. First, is detected in young and old systems, hinting at a long-lived chemistry. Second, as a radical, is rapidly destroyed, within $<$1000 yr. These two statements hint that the chemistry responsible for emission must be predominantly in the gas-phase and must be in equilibrium. Combining new and published chemical models we find that elevating the total volatile (gas and ice) C/O ratio is the only natural way to create a long lived, high abundance. Most of the resides in gas with a $F_\\mathrm{UV}/n_\\mathrm{gas} \\sim 10^{-7}\\,G_0\\, \\mathrm{cm}^3$. To elevate the volatile C/O ratio, additional carbon has to be released into the gas to enable an equilibrium chemistry under oxygen-poor conditions. Photo-ablation of carbon-rich grains seems the most straightforward way to elevate the C/O ratio above 1.5, powering a long-lived equilibrium cycle. The regions at which the conditions are optimal for the presence of high C/O ratio and elevated abundances in the gas disk set by the $F_\\mathrm{UV}/n_\\mathrm{gas}$ condition lie just outside the pebble disk as well as possibly in disk gaps." +"---\nabstract: 'Since its launch, the Alpha Magnetic Spectrometer \u2013 02 (AMS-02) has delivered outstanding quality measurements of the spectra of cosmic-ray (CR) species, $\\bar{p}$, $e^{\\pm}$, and nuclei, $_1$H\u2013$_8$O, $_{10}$Ne, $_{12}$Mg, $_{14}$Si, which resulted in a number of breakthroughs. One of the latest long awaited surprises is the spectrum of $_{26}$Fe just published by AMS-02. Because of the large fragmentation cross section and large ionization energy losses, most of CR iron at low energies is local, and may harbor some features associated with relatively recent supernova (SN) activity in the solar neighborhood. Our analysis of the new AMS-02 results together with Voyager 1 and ACE-CRIS data reveals an unexpected bump in the iron spectrum and in the Fe/He, Fe/O, and Fe/Si ratios at 1\u20132 GV, while a similar feature in the spectra of He, O, Si, and in their ratios is absent, hinting at a local source of low-energy CRs. The found excess extends the recent discoveries of radioactive $^{60}$Fe deposits in terrestrial and lunar samples, and in CRs. We provide an updated local interstellar spectrum (LIS) of iron in the energy range from 1 MeV nucleon$^{-1}$ to $\\sim$10 TeV nucleon$^{-1}$. 
Our calculations employ the [GalProp]{}\u2013[HelMod]{} framework that" +"---\nabstract: 'The 21\u00a0cm linear polarization due to Thomson scattering off free electrons can probe the distribution of neutral hydrogen in the intergalactic medium during the epoch of reionization, complementary to the 21\u00a0cm temperature fluctuations. A previous study [@2005ApJ...635....1B] estimated the strength of polarization with a toy model and claimed that it can be detected with a 1-month observation of the Square Kilometre Array (SKA). Here we revisit this investigation, accounting for nonlinear terms due to inhomogeneous reionization and using seminumerical reionization simulations to provide a realistic estimation of the 21\u00a0cm TE and EE angular power spectra ($C^{\\rm TE}_\\ell$ and $C^{\\rm EE}_\\ell$). We find that (1) both power spectra are enhanced on sub-bubble scales but suppressed on super-bubble scales, compared with previous results; (2) $C^{\\rm TE}_\\ell$ displays a zero-crossing at $\\ell<100$, and its angular scale is sensitive to the scale-dependence of H\u00a0[I]{} bias on large scales; (3) the ratios of the power spectrum to its maximum value during reionization at a given $\\ell$, i.e.\u00a0$C^{\\rm TE}_\\ell / C^{\\rm TE}_{\\ell,{\\rm max}} $ and $C^{\\rm EE}_{\\ell}/C^{\\rm EE}_{\\ell,{\\rm max}}$, show robust correlations with the global ionized fraction. However, measurement of this signal will be very challenging not only because the overall" +"---\nabstract: 'Clinical diagnosis, which aims to assign diagnosis codes for a patient based on the clinical note, plays an essential role in clinical decision-making. Considering that manual diagnosis could be error-prone and time-consuming, many intelligent approaches based on clinical text mining have been proposed to perform automatic diagnosis. However, these methods may not achieve satisfactory results due to the following challenges. First, most of the diagnosis codes are rare, and the distribution is extremely unbalanced. Second, it is challenging for existing methods to capture the correlation between diagnosis codes. Third, the lengthy clinical note leads to the excessive dispersion of key information related to codes. To tackle these challenges, we propose a novel framework to combine the inheritance-guided hierarchical assignment and co-occurrence graph propagation for clinical automatic diagnosis. Specifically, we propose a hierarchical joint prediction strategy to address the challenge of the unbalanced code distribution. Then, we utilize graph convolutional neural networks to obtain the correlation and semantic representations of the medical ontology. Furthermore, we introduce multi-attention mechanisms to extract crucial information. Finally, extensive experiments on the MIMIC-III dataset clearly validate the effectiveness of our method.'\nauthor:\n- Yichao Du\n- Pengfei Luo\n- Xudong Hong\n- Tong Xu\n- Zhe Zhang\n-" +"---\nabstract: 'The proton drip-line nucleus $^{17}$Ne is investigated experimentally in order to determine its two-proton halo character. A fully exclusive measurement of the $^{17}$Ne$(p,2p)^{16}$F$^*\\rightarrow ^{15}$O$+p$ quasi-free one-proton knockout reaction has been performed at GSI at around 500\u00a0MeV/nucleon beam energy. All particles resulting from the scattering process have been detected. 
The relevant reconstructed quantities are the angles of the two protons scattered in quasi-elastic kinematics, the decay of $^{16}$F into $^{15}$O (including $\\gamma$ decays from excited states) and a proton, as well as the $^{15}$O$+p$ relative-energy spectrum and the $^{16}$F momentum distributions. The latter two quantities allow an independent and consistent determination of the ratio of $l=0$ and $l=2$ motion of the valence protons in $^{17}$Ne. With a resulting relatively small $l=0$ component of only around 35(3)%, it is concluded that $^{17}$Ne exhibits a rather modest halo character only. The quantitative agreement of the two values deduced from the energy spectrum and the momentum distributions supports the theoretical treatment of the calculation of momentum distributions after quasi-free knockout reactions at high energies by taking into account distortions based on the Glauber theory. Moreover, the experimental data allow the separation of valence-proton knockout and knockout from the $^{15}$O core. The" +"---\nabstract: 'For scope and context, the idea we\u2019ll describe below, Compact Java Monitors, is intended as a potential replacement implementation for the \u201csynchronized\u201d construct in the HotSpot JVM. The reader is assumed to be familiar with the current HotSpot implementation.'\nauthor:\n- Dave Dice\n- Alex Kogan\nbibliography:\n- 'cjm.bib'\ntitle: Compact Java Monitors\n---\n\n[ ]{}\n\nIntroduction\n============\n\n**Compact Java Monitors (CJM)** are based on the Compact NUMA-Aware Locks (CNA) algorithm, but ignoring the *NUMA-Aware* property and focusing on the *Compact* aspect. CNA is itself a variation on the gold-standard MCS (Mellor-Crummey Scott) [@tocs91-MellorCrummey] queue-based lock algorithm [^1]. Underlying much of the following design is our approach from Compact NUMA-Aware Locks (CNA) which was published in EuroSys 2019 [@EuroSys19-CNA; @arxiv-CNA] and is being integrated into the Linux kernel as a replacement for the existing low-level *qspinlock* construct [@linux-locks; @Long13], which is itself based on MCS. CNA is itself a variation on classic MCS. One of the key ideas in CNA is propagating values of interest from the MCS owner\u2019s queue node into the successor, which allows the lock body to remain compact \u2013 just one word. Specifically, fields that would normally appear in the body of a lock are" +"---\nabstract: 'A point of a metric space is called a geodesic star with $m$ arms if it is the endpoint of $m$ disjoint geodesics. For every $m\\in\\{1,2,3,4\\}$, we prove that the set of all geodesic stars with $m$ arms in the Brownian sphere has dimension $5-m$. This complements recent results of Miller and Qian, who proved that this dimension is smaller than or equal to $5-m$.'\nauthor:\n- 'Jean-Fran\u00e7ois Le Gall'\ndate: 'Universit\u00e9 Paris-Saclay'\ntitle: 'Geodesic stars in random geometry[^1]'\n---\n\nIntroduction\n============\n\nThis work is concerned with the continuous models of random geometry that have been studied extensively in recent years. In particular, we consider the Brownian sphere or Brownian map, which is the scaling limit in the Gromov-Hausdorff sense of triangulations or quadrangulations of the sphere with $n$ faces chosen uniformly at random, and of much more general random planar maps (see in particular [@Abr; @AA; @BJM; @Uniqueness; @Mar; @Mie-Acta]). 
We are primarily interested in the study of geodesics in the Brownian sphere, but our main result remains valid in the related models called the Brownian plane [@Plane; @CLG] and the Brownian disk [@BM; @Disks].\n\nRecall that a geodesic in a metric space $(E,d)$ is a" +"---\nabstract: |\n Recently a mechanism called stagnation detection was proposed that automatically adjusts the mutation rate of evolutionary algorithms when they encounter local optima. The so-called [SD-(1+1)\u00a0EA]{} introduced by Rajabi and Witt (GECCO\u00a02020) adds stagnation detection to the classical [(1+1)\u00a0EA]{} with standard bit mutation, which flips each bit independently with some mutation rate, and raises the mutation rate when the algorithm is likely to have encountered local optima.\n\n In this paper, we investigate stagnation detection in the context of the $k$-bit flip operator of randomized local search that flips $k$ bits chosen uniformly at random and let stagnation detection adjust the parameter\u00a0$k$. We obtain improved runtime results compared to the [SD-(1+1)\u00a0EA]{}, amounting to a speed-up of up to\u00a0$e=2.71\\dots$ Moreover, we propose additional schemes that prevent infinite optimization times even if the algorithm misses a working choice of\u00a0$k$ due to unlucky events. Finally, we present an example where standard bit mutation still outperforms the local $k$-bit flip with stagnation detection.\nauthor:\n- |\n Amirhossein Rajabi\\\n Technical University of Denmark\\\n Kgs. Lyngby\\\n Denmark\\\n amraj@dtu.dk\\\n- |\n Carsten Witt\\\n Technical University of Denmark\\\n Kgs. Lyngby\\\n Denmark\\\n cawi@dtu.dk\\\nbibliography:\n- 'references.bib'\ntitle: |\n Stagnation Detection with\\\n Randomized Local Search" +"---\nauthor:\n- 'Qin-Qin Wang'\n- 'Ri-Zhou Liang'\n- 'Ji-Qiang Zhang'\n- 'Guo-Zhong Zheng'\n- Lin Ma\n- 'Li Chen[^1]'\ntitle: 'Emergent route towards cooperation in interacting games: the dynamical reciprocity'\n---\n\nIntroduction\n============\n\nRecent withdrawals of the United States from a couple of \u201cgroups\u201d like WHO, Paris Agreement, UNESCO etc. signify degraded cooperation at the global scale. Any solution to this sort of problem requires an understanding of what processes drive and maintain human cooperation and what measures or institutions could be implemented for its promotion. The key question to be addressed is: why do entities that could potentially be in competition help each other and incur a cost to themselves? As the paradigm of *homo economicus* shows, people always try to maximize their earnings and avoid irrational investments, which inevitably leads to the tragedy of the commons [@hardin1968tragedy].\n\nImportant progress has been made with the help of evolutionary game theory\u00a0[@Nowak2004Evolutionary] by analysing the stylized social dilemmas such as prisoner\u2019s dilemma and the public goods game. Several mechanisms have been proposed [@Nowak2006Five] in the past several decades, such as reward and punishment [@Sigmund2001Reward], social diversity\u00a0[@Santos2008Social], direct\u00a0[@trivers1971evolution] or indirect reciprocity\u00a0[@nowak1998evolution], kin\u00a0[@hamilton1964genetical] or group selection [@keller1999levels; @Queller1964Group]," +"---\nabstract: 'We present [[**`TruthBot`**]{}]{}\u00a0(name is anonymized), an all-in-one multilingual conversational chatbot designed for seeking truth (trustworthy and verified information) on specific topics. 
It helps users to obtain information specific to certain topics, fact-check information, and get recent news. The chatbot learns the intent of a query by training a deep neural network from the data of the previous intents and responds appropriately when it classifies the intent into one of the classes above. Each class is implemented as a separate module which uses either its own curated knowledge-base or searches the web to obtain the correct information. The topic of the chatbot is currently set to COVID-19. However, the bot can be easily customized to give topic-specific responses for any other topic. Our experimental results show that each module performs significantly better than its closest competitor, which is verified both quantitatively and through several user-based surveys in multiple languages. [[**`TruthBot`**]{}]{}\u00a0was deployed in June 2020 and is currently running.'\nauthor:\n- 'Ankur Gupta$^*$'\n- 'Yash Varun$^*$'\n- 'Prarthana Das$^*$'\n- 'Nithya Muttineni$^*$'\n- 'Parth Srivastava$^*$'\n- Hamim Zafar\n- Tanmoy Chakraborty\n- Swaprava Nath\nbibliography:\n- 'abb.bib'\n- 'swaprava.bib'\n- 'ultimate.bib'\n- 'references.bib'\n- 'references\\_robots.bib'\n- 'master.bib'\ntitle: '**[[**`TruthBot`**]{}]{}: An Automated" +"---\nabstract: |\n The discrete fracture model (DFM) has been widely used in the simulation of fluid flow in fractured porous media. Traditional DFM uses the so-called hybrid-dimensional approach to treat fractures explicitly as low-dimensional entities (e.g. line entities in 2D media and face entities in 3D media) on the interfaces of matrix cells and then couple the matrix and fracture flow systems together based on the principle of superposition with the fracture thickness used as the dimensional homogeneity factor. Because of this methodology, DFM is considered to be limited to conforming meshes and thus may raise difficulties in generating high quality unstructured meshes due to the complexity of the fractures\u2019 geometrical morphology. In this paper, we clarify that the DFM can actually be extended to non-conforming meshes without any essential changes. To show it clearly, we provide another perspective for DFM based on a hybrid-dimensional representation of the permeability tensor, describing fractures as one-dimensional line Dirac delta functions contained in the permeability tensor. A finite element DFM scheme for single-phase flow on non-conforming meshes is then derived by applying the Galerkin finite element method to it. Analytical analysis and numerical experiments show that our DFM automatically degenerates to the classical finite element" +"---\nabstract: '[Clinical trials involving novel immuno-oncology (IO) therapies frequently exhibit survival profiles which violate the proportional hazards assumption due to a delay in treatment effect, and in such settings, the survival curves in the two treatment arms may have a crossing before the two curves eventually separate. To flexibly model such scenarios, we describe a nonparametric approach for estimating the treatment arm-specific survival functions which constrains these two survival functions to cross at most once without making any additional assumptions about how the survival curves are related. A main advantage of our approach is that it provides an estimate of a crossing time if such a crossing exists, and moreover, our method generates interpretable measures of treatment benefit including crossing-conditional survival probabilities and crossing-conditional estimates of restricted residual mean life. 
We demonstrate the use and effectiveness of our approach with a large simulation study and an analysis of reconstructed outcomes from a recent combination-therapy trial.]{}[censored data; clinical trial; constrained estimation; immuno-oncology; non-proportional hazards]{}'\nauthor:\n- |\n Nicholas C. Henderson$^{1\\ast}$, Kijoeng Nam$^2$ and Dai Feng$^3$\\\n $^{1}$Department of Biostatistics, University of Michigan, Ann Arbor, MI, USA\\\n $^{2}$BARDS, Merck & Co., Inc., North Wales, PA, USA\\\n \\[2pt\\] $^{3}$Data and Statistical Sciences, AbbVie" +"---\nabstract: 'Under GRH, we establish a version of Duke\u2019s short-sum theorem for entire Artin $L$-functions. This yields corresponding bounds for residues of Dedekind zeta functions. All numerical constants in this work are explicit.'\naddress:\n- 'Department of Mathematics, Pomona College, 610 N. College Ave., Claremont, CA 91711'\n- 'School of Science, UNSW Canberra at the Australian Defence Force Academy, Northcott Drive, Campbell, ACT 2612'\nauthor:\n- Stephan Ramon Garcia\n- Ethan Simpson Lee\nbibliography:\n- 'EEALFDSSTDZR.bib'\ntitle: 'Explicit estimates for Artin $L$-functions: Duke\u2019s short-sum theorem and Dedekind Zeta Residues'\n---\n\n[^1]\n\nIntroduction\n============\n\nIn [@Duke], Duke proved a remarkable \u201cshort-sum theorem\u201d that relates the value of an Artin $L$-function at $s=1$ to a sum over an exceptionally small set of primes. To be more specific, if $L(s,\\chi)$ is an entire Artin $L$-function that satisfies the Generalized Riemann Hypothesis (GRH), Duke proved that $$\\label{eqn:Duke_original}\n\\log L(1,\\chi) \\,\\,=\\!\\!\\!\\! \\sum_{p \\leq (\\log N)^{1/2} } \\frac{\\chi(p)}{p} + O(1),$$ in which $p$ is a prime, $\\chi$ has degree $d$ and conductor $N$, and the implicit constant depends only upon $d$ [@Duke Prop.\u00a05]. Our main result is an explicit version of Duke\u2019s theorem (by \u201cexplicit\u201d we mean that there are no implied constants left" +"---\nauthor:\n- 'Junmou Chen$^1$,'\n- 'Chengcheng Han$^2$,'\n- 'Jin Min Yang$^{3,4}$,'\n- Mengchao Zhang$^1$\ndate: '2020.06.15'\ntitle: Probing bino NLSP at lepton colliders\n---\n\nIntroduction\n============\n\nDespite no experimental evidence, supersymmetry remains one of the most compelling scenarios beyond the standard model. It not only alleviates the fine-tuning of the Higgs mass relative to the fundamental scale, but also predicts the unification of the gauge couplings and provides a viable dark matter candidate. Currently, the Large Hadron Collider (LHC) already sets very strong limits on the mass scale of SUSY partners. For example, the gluino and squarks should be beyond 1-2 TeV [@Aad:2020nyj; @Aad:2020sgw] and the limits on electroweakinos and sleptons are around a few hundred GeV depending on the assumptions [@Aad:2019vvi; @Aad:2019qnd]. For a degenerate spectrum such as higgsinos or winos, the strongest bounds are from LEP and their masses should be larger than around 100 GeV [@LEP]. However, there remains a possibility that a bino could be as light as a few tens of GeV. Given the designs of future lepton colliders [@CEPCStudyGroup:2018ghi; @ILC; @Abada:2019zxq], they provide a good opportunity to look for such a light bino.\n\nIf the bino is the lightest supersymmetric particle (LSP) as well" +"---\nabstract: 'Magneto-optical traps (MOTs) are widely used for laser cooling of atoms. 
We have developed a high-flux compact cold-atom source based on a pyramid MOT with a unique adjustable aperture that is highly suitable for portable quantum technology devices, including space-based experiments. The adjustability enabled an investigation into the previously unexplored impact of aperture size on the atomic flux, and optimisation of the aperture size allowed us to demonstrate a higher flux than any reported cold-atom sources that use a pyramid, LVIS, 3D-MOT or grating MOT. We achieved $2.0(1) \\times 10^{10}$atoms/s of $^{87}$Rb with a mean velocity of 32(1)m/s, FWHM of 27.6(9)m/s and divergence of 58(3)mrad. Halving the total optical power to 195mW caused only a 26% reduction of the flux, and a 33% decrease in mean velocity. Methods to further decrease the velocity as required have been identified. The low power consumption and small size make this design suitable for a wide range of cold-atom technologies.'\naddress: |\n Clarendon Laboratory, University of Oxford, Parks Road, Oxford, OX1 3PU, UK\\\n PA Consulting, Melbourn, SG8 6DP, UK\\\n School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK \nauthor:\n- 'Sean Ravenhall, Benjamin Yuen, and Chris Foot'\nbibliography:\n-" +"---\nabstract: 'We introduce S++, a simple, robust, and deployable framework for training a neural network (NN) using private data from multiple sources, using secret-shared secure function evaluation. In short, consider a virtual third party to whom every data-holder sends their inputs, and which computes the neural network: in our case, this virtual third party is actually a set of servers which individually learn nothing, even with a malicious (but non-colluding) adversary. Previous work in this area has been limited to just one specific activation function: ReLU, rendering the approach impractical for many use-cases. For the first time, we provide fast and verifiable protocols for **all** common activation functions and optimize them for running in a secret-shared manner. The ability to quickly, verifiably, and robustly compute exponentiation, softmax, sigmoid, etc., allows us to use previously written NNs without modification, vastly reducing developer effort and complexity of code. In recent times, ReLU has been found to converge much faster and be more computationally efficient as compared to non-linear functions like sigmoid or tanh. However, we argue that it would be remiss not to extend the mechanism to non-linear functions such as the logistic sigmoid, tanh, and softmax that are fundamental due" +"---\nabstract: 'In this paper, we present a novel tone mapping algorithm that can be used for displaying wide dynamic range (WDR) images on low dynamic range (LDR) devices. The proposed algorithm is mainly motivated by the logarithmic response and local adaptation features of the human visual system (HVS). HVS perceives luminance differently when under different adaptation levels, and therefore our algorithm uses functions built upon different scales to tone map pixels to different values. Functions of large scales are used to maintain image brightness consistency and functions of small scales are used to preserve local detail and contrast. An efficient method using local variance has been proposed to fuse the values of different scales and to remove artifacts. The algorithm utilizes integral images and integral histograms to reduce computation complexity and processing time. 
Experimental results show that the proposed algorithm can generate bright, high-contrast and appealing images that surpass the performance of many state-of-the-art tone mapping algorithms. This project is available at https://github.com/jieyang1987/Tone-Mapping-Based-on-Multi-scale-Histogram-Synthesis.'\nauthor:\n- 'Jie Yang [^1]'\n- Ziyi Liu\n- Ulian Shahnovich\n- 'Orly Yadid-Pecht'\nbibliography:\n- 'egbib.bib'\ntitle: 'Tone Mapping Based on Multi-scale Histogram Synthesis'\n---\n\nWide dynamic range image (WDR), tone mapping, local" +"---\nabstract: 'Several robot manipulation tasks are extremely sensitive to variations of the physical properties of the manipulated objects. One such task is manipulating objects by using gravity or arm accelerations, increasing the importance of mass, center of mass, and friction information. We present SwingBot, a robot that is able to learn the physical features of a held object through tactile exploration. Two exploration actions (tilting and shaking) provide the tactile information used to create a physical feature embedding space. With this embedding, SwingBot is able to predict the swing angle achieved by a robot performing dynamic swing-up manipulations on a previously unseen object. Using these predictions, it is able to search for the optimal control parameters for a desired swing-up angle. We show that with the learned physical features our end-to-end self-supervised learning pipeline is able to substantially improve the accuracy of swinging up unseen objects. We also show that objects with similar dynamics are closer to each other on the embedding space and that the embedding can be disentangled into values of specific physical properties.'\nauthor:\n- |\n Chen Wang$^{*1,2}$, Shaoxiong Wang$^{*1}$, Branden Romero$^{1}$, Filipe Veiga$^{1}$ and Edward Adelson$^{1}$\\\n [^1] [^2][^3]\nbibliography:\n- 'references.bib'\ntitle: '**SwingBot: Learning" +"---\nabstract: '> Understanding the meaning of a text is a fundamental challenge of natural language understanding (NLU) research. An ideal NLU system should process a language in a way that is not exclusive to a single task or a dataset. Keeping this in mind, we have introduced a novel knowledge-driven semantic representation approach for English text. By leveraging the VerbNet lexicon, we are able to map the syntax tree of the text to its commonsense meaning represented using basic knowledge primitives. The general-purpose knowledge represented by our approach can be used to build any reasoning-based NLU system that can also provide justification. We applied this approach to construct two NLU applications that we present here: SQuARE (Semantic-based Question Answering and Reasoning Engine) and StaCACK (Stateful Conversational Agent using Commonsense Knowledge). Both these systems work by \u201ctruly understanding\u201d the natural language text they process and both provide natural language explanations for their responses while maintaining high accuracy.'\nauthor:\n- |\n Kinjal Basu$^1$, Sarat Varanasi$^1$, Farhad Shakerin$^1$, Joaquin Arias$^2$ and Gopal Gupta$^1$\\\n [ $^1$Department of Computer Science $^2$Artificial Intelligence Research Group]{}\\\n [\u00a0The University of Texas at Dallas, USA Universidad Rey Juan Carlos, Madrid, Spain]{}\nbibliography:\n- 'bibliography.bib'\ntitle:" +"---\nabstract: 'We review the semiclassical two-step model for strong-field ionization. 
The semiclassical two-step model describes quantum interference and accounts for the ionic potential beyond the semiclassical perturbation theory. We discuss the formulation and implementation of this model, its further developments, as well as some of its applications. The reviewed applications of the model include strong-field holography with photoelectrons, multielectron polarization effects in ionization by an intense laser pulse, and strong-field ionization of the hydrogen molecule.'\nauthor:\n- 'N. I. Shvetsov-Shilovski'\ntitle: 'Semiclassical two-step model for ionization by a strong laser pulse: Further developments and applications.'\n---\n\n[^1]\n\nIntroduction\n============\n\nStrong-field physics studies phenomena arising from the interaction of strong laser pulses with atoms and molecules. The most well-known examples of these highly nonlinear phenomena are above-threshold ionization (ATI), formation of the high-energy plateau in the electron energy spectrum (High-order ATI), generation of high-order harmonics (HHG) and nonsequential double ionization (NSDI), see Refs.\u00a0[@DeloneBook2000; @BeckerRev2002; @MilosevicRev2003; @FaisalRev2005; @FariaRev2011] for reviews. Both experimental and theoretical approaches used to analyze these processes are constantly being improved. The vast" +"---\nabstract: 'In this work we present the first steps towards benchmarking isospin symmetry breaking in [*ab initio*]{} nuclear theory for calculations of superallowed Fermi $\\beta$-decay. Using the valence-space in-medium similarity renormalization group, we calculate $b$ and $c$ coefficients of the isobaric multiplet mass equation, starting from two different Hamiltonians constructed from chiral effective field theory. We compare results to experimental measurements for all $T=1$ isobaric analogue triplets of relevance to superallowed $\\beta$-decay for masses $A=10$ to $A=74$ and find an overall agreement within approximately 250\u00a0keV of experimental data for both $b$ and $c$ coefficients. A greater level of accuracy, however, is obtained by a phenomenological Skyrme interaction or a classical charged-sphere estimate. Finally, we show that evolution of the valence-space operator does not meaningfully improve the quality of the coefficients with respect to experimental data, which indicates that higher-order many-body effects are likely not responsible for the observed discrepancies.'\nauthor:\n- 'M.\u00a0S.\u00a0Martin'\n- 'S.\u00a0R.\u00a0Stroberg'\n- 'J.\u00a0D.\u00a0Holt'\n- 'K.\u00a0G.\u00a0Leach'\nbibliography:\n- 'ref.bib'\ntitle: Testing isospin symmetry breaking in ab initio nuclear theory\n---\n\nIntroduction\n============\n\nFundamental Symmetry Tests\n--------------------------\n\nPrecision measurements of superallowed $0^{+}\\to0^{+}$ $\\beta$-decays are a critical tool to search" +"---\nabstract: 'In order to further exploit the potential of joint multi-antenna radar-communication (RadCom) systems, we propose two transmission techniques respectively based on separated and shared antenna deployments. Both techniques are designed to maximize the weighted sum rate (WSR) and the probing power at the target\u2019s location under average power constraints at the antennas such that the system can simultaneously communicate with downlink users and detect the target within the same frequency band. 
Based on a weighted minimum mean square error (WMMSE) method, the separated deployment transmission is designed via semidefinite programming (SDP) while the shared deployment problem is solved by a majorization-minimization (MM) algorithm. Numerical results show that shared deployment outperforms separated deployment in radar beamforming. The tradeoffs between WSR and probing power at the target are compared among both proposed transmissions and two practically simpler dual-function implementations, i.e., time division and frequency division. Results show that although separated deployment has an advantage of realizing spectrum sharing, it experiences a performance loss compared with frequency division, while shared deployment outperforms both and surpasses time division under certain conditions.'\naddress:\n- 'College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China'\n- 'Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ," +"---\nabstract: 'In recent years, transformer-based language models have achieved state-of-the-art performance in various NLP benchmarks. These models are able to extract mostly distributional information with some semantics from unstructured text; however, it has proven challenging to integrate structured information, such as knowledge graphs, into these models. We examine a variety of approaches to integrate structured knowledge into current language models and determine challenges and possible opportunities to leverage both structured and unstructured information sources. From our survey, we find that there are still opportunities in exploiting adapter-based injections and that it may be possible to further combine several of the explored approaches into one system.'\nauthor:\n- 'Pedro Colon-Hernandez[^1]'\n- Catherine Havasi\n- Jason Alonso\n- Matthew Huggins\n- Cynthia Breazeal\nbibliography:\n- 'bibliography.bib'\ntitle: 'Combining pre-trained language models and structured knowledge'\n---\n\nIntroduction\n============\n\nRecent developments in Language Modeling (LM) techniques have greatly improved the performance of systems in a wide range of Natural Language Processing (NLP) tasks. Many of the current state-of-the-art systems are based on variations of the transformer [@vaswani2017attention] architecture. The transformer architecture, along with modifications such as the Transformer XL [@dai2019transformer] and various training regimes such as the" +"---\nabstract: 'We develop a Bayesian approach to estimate weight matrices in spatial autoregressive (or spatial lag) models. Datasets in the regional economics literature are typically characterized by a limited number of time periods $T$ relative to spatial units $N$. When the spatial weight matrix is subject to estimation, severe problems of over-parametrization are likely. To make estimation feasible, our approach focusses on spatial weight matrices which are binary prior to row-standardization. We discuss the use of hierarchical priors which impose sparsity in the spatial weight matrix. Monte Carlo simulations show that these priors perform very well where the number of unknown parameters is large relative to the observations. 
The virtues of our approach are demonstrated using global data from the early phase of the COVID-19 pandemic.'\nauthor:\n- |\n Tam\u00e1s Krisztin[^1]\\\n International Institute for Applied Systems Analysis (IIASA)\\\n and\\\n Philipp Piribauer[^2]\\\n Austrian Institute of Economic Research (WIFO)\ntitle: '**A Bayesian approach for estimation of weight matrices in spatial autoregressive models[^3]** '\n---\n\n[**A Bayesian approach for estimation of weight matrices in spatial autoregressive models**]{}\n\n[*Keywords:*]{} Estimation of spatial weight matrix, spatial econometric model, Bayesian MCMC estimation, Monte Carlo simulations, COVID-19 pandemic\\\n\\\n[*JEL Codes:*]{} C11," +"---\nabstract: |\n The Pythagorean fuzzy set (PFS), which is developed based on the intuitionistic fuzzy set, is more efficient in elaborating and dealing with uncertainties in indeterminate situations, which is the very reason that PFS is applied in various fields. How to measure the distance between two Pythagorean fuzzy sets is still an open issue. Many kinds of methods have been proposed to address the question in former researches. However, not all existing methods can accurately manifest differences among Pythagorean fuzzy sets and satisfy the property of similarity. And some other kinds of methods neglect the relationship among the three variables of a Pythagorean fuzzy set. To address the problem, a new method of measuring distance is proposed which meets the requirements of the axioms of distance measurement and is able to indicate the degree of distinction of PFSs well. Then some numerical examples are offered to verify that the method of measuring distances can avoid the situation that some counter-intuitive and irrational results are produced and is more effective, reasonable and advanced than other similar methods. Besides, the proposed method of measuring distances between PFSs is applied in a real environment of application which is the" +"---\nabstract: 'Magnetic resonance is a widely-established phenomenon that probes magnetic properties such as magnetic damping and anisotropy. Even though the typical resonance frequency of a magnet ranges from gigahertz to terahertz, experiments also report the resonance near zero frequency in a large class of magnets. Here we revisit this phenomenon by analyzing the symmetry of the system and find that the resonance frequency ($\\omega$) follows a universal power law $\\omega \\varpropto |H-H_c|^p$, where $H_c$ is the critical field at which the resonance frequency is zero. When the magnet preserves the rotational symmetry around the external field ($H$), $p = 1$. Otherwise, $p=1/2$. The magnon excitations are gapped above $H_c$, gapless at $H_c$ and gapped again below $H_c$. The zero frequency is often accompanied by a reorientation transition in the magnetization. For the case that $p=1/2$, this transition is described by a Landau theory for second-order phase transitions. We further show that the spin current driven by thermal gradient and spin-orbit effects can be significantly enhanced when the resonance frequency is close to zero, which can be measured electrically by converting the spin current into electric signals. This may provide an experimentally accessible way to characterize the critical field. 
Our" +"---\nabstract: 'Many models such as Long Short Term Memory (LSTMs), Gated Recurrent Units (GRUs) and transformers have been developed to classify time series data with the assumption that events in a sequence are ordered. On the other hand, fewer models have been developed for set based inputs, where order does not matter. There are several use cases where data is given as partially-ordered sequences because of the granularity or uncertainty of time stamps. We introduce a novel transformer based model for such prediction tasks, and benchmark against extensions of existing order invariant models. We also discuss how transition probabilities between events in a sequence can be used to improve model performance. We show that the transformer-based equal-time model outperforms extensions of existing set models on three data sets.'\nauthor:\n- 'Stephanie Ger$^1$, Diego Klabjan $^2$, Jean Utke $^3$'\nbibliography:\n- 'equal-time.bib'\ndate: |\n $^1$ Department of Engineering Sciences and Applied Mathematics, Northwestern University, Evanston, Illinois, USA\\\n $^2$ Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, Illinois, USA\\\n $^3$ Allstate Insurance Company\\\ntitle: Classification Models for Partially Ordered Sequences\n---\n\nIntroduction\n============\n\nWith the development of Recurrent Neural Networks (RNNs), many model architectures such as LSTMs and GRUs" +"---\nabstract: |\n **Abstract**\n\n Twisted bilayer graphene (TBG) develops large [moir\u00e9]{} patterns at small twist angles with flat energy bands hosting domes of superconductivity. The large system size and intricate band structure have however hampered investigations into the superconducting state. Here, using full-scale atomistic modelling with local electronic interactions, we find at and above experimentally relevant temperatures a highly inhomogeneous superconducting state with nematic ordering on both atomic and [moir\u00e9]{} length scales. The nematic state has a locally anisotropic real-valued $d$-wave pairing, with a nematic vector winding throughout the [moir\u00e9]{} pattern, and is three-fold degenerate. Although $d$-wave symmetric, the superconducting state has a full energy gap, which we tie to a $\\pi$-phase interlayer coupling. The superconducting nematicity is further directly detectable in the local density of states. Our results show that atomistic modeling is essential and also that very similar local interactions produce very different superconducting states in TBG and the high-temperature cuprate superconductors.\nauthor:\n- Tomas L\u00f6thman\n- Johann Schmidt\n- Fariborz Parhizgar\n- 'Annica M. Black-Schaffer'\ntitle: ' Nematic superconductivity in magic-angle twisted bilayer graphene from atomistic modeling '\n---\n\nIntroduction\n============\n\nPrecise twist angle and carrier density control have made it possible to map the rich phase" +"---\nabstract: |\n In 1848 Ch.\u00a0Hermite asked if there exists some way to write cubic irrationalities periodically. A little later in order to approach the problem C.G.J.\u00a0Jacobi and O.\u00a0Perron generalized the classical continued fraction algorithm to the three-dimensional case, this algorithm is called now the Jacobi-Perron algorithm. This algorithm is known to provide periodicity only for some cubic irrationalities.\n\n In this paper we introduce two new algorithms in the spirit of Jacobi-Perron algorithm: the heuristic algebraic periodicity detecting algorithm and the $\\sin^2$-algorithm. 
The heuristic algebraic periodicity detecting algorithm is a very fast and efficient algorithm; its output is periodic for numerous examples of cubic irrationalities, but its periodicity for cubic irrationalities is not proven. The $\\sin^2$-algorithm is limited to the totally-real cubic case (all the roots of cubic polynomials are real numbers). In the recent paper\u00a0[@Karpenkov2021] we proved the periodicity of the $\\sin^2$-algorithm for all cubic totally-real irrationalities. To the best of our knowledge, this is the first Jacobi-Perron type algorithm for which the cubic periodicity is proven. The $\\sin^2$-algorithm provides the answer to Hermite\u2019s problem for the totally real case (let us mention that the case of cubic algebraic numbers with complex conjugate roots remains open).\n\n We" +"---\nabstract: 'This work presents a theory to unify the two independent theoretical frameworks of Kohn-Sham (KS) density functional theory (DFT) and reduced density matrix functional theory (RDMFT). The generalization of the KS orbitals to hypercomplex number systems leads to the hypercomplex KS (HCKS) theory, which extends the search space for the density in KS-DFT to a space that is equivalent to natural spin orbitals with fractional occupations in RDMFT. Thereby, HCKS is able to capture the multi-reference nature of strong correlation by dynamically varying fractional occupations. Moreover, the potential of HCKS to overcome the fundamental limitations of KS is verified on systems with strong correlation, including atoms of transition metals. As a promising alternative to the realization of DFT, HCKS opens up new possibilities for the development and application of DFT in the future.'\nauthor:\n- Neil Qiang Su\nbibliography:\n- 'ref.bib'\ntitle: 'Unity of Kohn-Sham Density Functional Theory and Reduced Density Matrix Functional Theory'\n---\n\n*Introduction.*\u2014Built upon the Hohenberg-Kohn theorem [@HK1964; @Levy1979pnas], Kohn-Sham (KS) density functional theory (DFT) [@KS1965; @PY1989; @Dreizler2012] is a formally exact theoretical framework toward the many-electron problem. Due to the favorable balance between accuracy and efficiency, KS-DFT has won enormous popularity that can manifest" +"---\nabstract: 'Two types of interventions are commonly implemented in networks: characteristic intervention, which influences individuals\u2019 intrinsic incentives, and structural intervention, which targets the social links among individuals. In this paper we provide a general framework to evaluate the distinct equilibrium effects of both types of interventions. We identify a hidden equivalence between a structural intervention and an *endogenously determined* characteristic intervention. Compared with existing approaches in the literature, the perspective from such an equivalence provides several advantages in the analysis of interventions that target network structure. We present a wide range of applications of our theory, including identifying the most wanted criminal(s) in delinquent networks and targeting the key connector for isolated communities. 
*JEL Classification: D21; D29; D82.* *Keywords:* [Network games; Structural intervention; Katz-Bonacich centrality; Targeting;]{}'\nauthor:\n- 'Yang Sun[^1]'\n- 'Wei Zhao[^2]'\n- 'Junjie Zhou[^3]'\nbibliography:\n- 'bib-intervention.bib'\ntitle: '**Structural Interventions in Networks[^4]** '\n---\n\nIntroduction {#sec:intro}\n============\n\nSocial ties shape economic agents\u2019 decisions in a connected world, ranging from which product to buy for consumers, how much time to spend studying for pupils, how much effort to exert for workers on a team, whether to commit a crime for teenagers, etc.[^5] These social ties, structurally represented as" +"---\nabstract: 'This paper presents a novel virus propagation model using NetLogo. The model allows agents to move across multiple sites using different routes. Routes can be configured, enabled for mobility and (un)locked down independently. Similarly, locations can also be (un)locked down independently. Agents can get infected, propagate their infections to others, take precautions against infection, and subsequently recover from infection. This model contains certain features that are not present in existing models. The model may be used for educational and research purposes, and the code is made available as open source. This model may also provide a broader framework for more detailed simulations. The results presented are only to demonstrate the model functionalities and do not serve any other purpose.'\nauthor:\n- \nbibliography:\n- 'IEEEabrv.bib'\n- 'citation\\_ref.bib'\ntitle: '[Agent Based Virus Model using NetLogo: Infection Propagation, Precaution, Recovery, Multi-site Mobility and (Un)Lockdown]{}'\n---\n\nagent based model, netlogo, virus, infection, precaution, recovery, mobility, lockdown\n\nIntroduction\n============\n\nAgent-based models have been explored for virus spread for a considerable period of time. These models have evolved over time based on newer requirements. NetLogo [@cite_netlogo_website] is one of the agent-based modeling platforms that have been used to model" +"---\nabstract: |\n A Diophantine $m$-tuple with elements in the field $K$ is a set of $m$ non-zero (distinct) elements of $K$ with the property that the product of any two distinct elements is one less than a square in $K$. Let $X: (x^2-1)(y^2-1)(z^2-1)=k^2$ be an affine variety over $K$. Its $K$-rational points parametrize Diophantine triples over $K$ such that the product of the elements of the triple that corresponds to the point $(x,y,z,k)\\in X(K)$ is equal to $k$. We denote by $\\overline{X}$ the projective closure of $X$ and for a fixed $k$ by $X_k$ a variety defined by the same equation as $X$.\n\n In this paper, we try to understand what the geometry of the varieties $X_k$, $X$ and $\\overline{X}$ can tell us about the arithmetic of Diophantine triples.\n\n First, we prove that the variety $\\overline{X}$ is birational to $\\mathbb{P}^3$, which leads us to a new rational parametrization of the set of Diophantine triples.\n\n Next, specializing to finite fields, we find a correspondence between a K3 surface $X_k$ for a given $k\\in\\mathbb{F}_{p}^{\\times}$ in the prime field $\\mathbb{F}_{p}$ of odd characteristic and an abelian surface which is a product of two elliptic curves $E_k\\times E_k$ where $E_k: y^2=x(k^2(1 + k^2)^3 +" +"---\nabstract: 'We consider the question of sequential prediction under the log-loss in terms of cumulative regret. 
Namely, given a hypothesis class of distributions, the learner sequentially predicts the (distribution of the) next letter in the sequence and its performance is compared to the baseline of the best constant predictor from the hypothesis class. The well-specified case corresponds to an additional assumption that the data-generating distribution belongs to the hypothesis class as well. Here we present results in the more general misspecified case. Due to special properties of the log-loss, the same problem arises in the context of competitive-optimality in density estimation and model selection. For the $d$-dimensional Gaussian location hypothesis class, we show that cumulative regrets in the well-specified and misspecified cases asymptotically coincide. In other words, we provide an $o(1)$ characterization of the distribution-free (or PAC) regret in this case \u2013 the first such result as far as we know. We recall that the worst-case (or individual-sequence) regret in this case is larger by an additive constant ${d\\over 2} + o(1)$. Surprisingly, neither the traditional Bayesian estimators nor Shtarkov\u2019s normalized maximum likelihood achieve the PAC regret, and our estimator requires special \u201crobustification\u201d against heavy-tailed data. In addition, we show" +"---\nabstract: 'We present a new analytic fitting profile to model the ram pressure exerted on satellite galaxies in different environments and epochs. The profile is built using the information of the gas particle distribution in hydrodynamical simulations of groups and clusters of galaxies to measure the ram pressure directly. We show that predictions obtained by a previously introduced $\\beta$\u2013profile model cannot consistently reproduce the dependence of the ram pressure on halocentric distance and redshift for a given halo mass. It features a systematic underestimation of the predicted ram pressure at high redshifts ($z > 1.5$), which increases towards the central regions of the haloes and is independent of halo mass, reaching differences larger than two decades for satellites at $r<0.4R_\\mathrm{vir}$. This behaviour reverses as redshift decreases, featuring an increasing overestimation with halocentric distance at $z=0$. As an alternative, we introduce a new universal analytic model for the profiles which can recover the ram pressure dependence on halo mass, halocentric distance and redshift. We analyse the impact of our new profile on galaxy properties by applying a semi-analytic model of galaxy formation and evolution on top of the simulations. We show that galaxies experiencing large amounts of cumulative" +"---\nabstract: 'We characterize dendrites $D$ such that a continuous selfmap of $D$ is generically chaotic (in the sense of Lasota) if and only if it is generically ${\\varepsilon}$-chaotic for some ${\\varepsilon}>0$. In other words, we characterize dendrites on which generic chaos of a continuous map can be described in terms of the behaviour of subdendrites with nonempty interiors under iterates of the map. 
A dendrite $D$ belongs to this class if and only if it is completely regular, with all points of finite order (that is, if and only if $D$ contains neither a copy of the Riemann dendrite nor a copy of the $\\omega$-star).'\naddress: 'Department of Mathematics, Faculty of Natural Sciences, Matej Bel University, Tajovsk\u00e9ho 40, 974 01 Bansk\u00e1 Bystrica, Slovakia'\nauthor:\n- '[\u013d]{}ubom\u00edr Snoha'\n- Vladim\u00edr \u0160pitalsk\u00fd\n- Michal Tak\u00e1cs\ntitle: Generic chaos on dendrites\n---\n\n[^1]\n\nIntroduction and main results {#S:intro}\n=============================\n\nDuring the last decades many interesting connections between dynamical systems and continuum theory have been studied. To illustrate this, we mention a few results.\n\nHandel [@Ha82] has constructed a $C^\\infty$ area preserving diffeomorphism of the plane with the pseudocircle as a minimal set.\n\nMany authors have been investigating the problem whether various classes" +"---\nabstract: 'Generative neural networks have a well recognized ability to estimate underlying manifold structure of high dimensional data. However, if a single latent space is used, it is not possible to faithfully represent a manifold with topology different from Euclidean space. In this work we define the general class of Atlas Generative Models (AGMs), models with hybrid discrete-continuous latent space that estimate an atlas on the underlying data manifold together with a partition of unity on the data space. We identify existing examples of models from various popular generative paradigms that fit into this class. Due to the atlas interpretation, ideas from non-linear latent space analysis and statistics, e.g. geodesic interpolation, which has previously only been investigated for models with simply connected latent spaces, may be extended to the entire class of AGMs in a natural way. We exemplify this by generalizing an algorithm for graph based geodesic interpolation to the setting of AGMs, and verify its performance experimentally.'\naddress: |\n Department of Computer Science\\\n University of Copenhagen\nauthor:\n- 'Jakob Stolberg-Larsen'\n- Stefan Sommer\nbibliography:\n- 'references.bib'\ntitle: |\n Atlas Generative Models\\\n and Geodesic Interpolation\n---\n\nIntroduction\n============\n\nThe ability of deep generative networks to learn complex features" +"---\nabstract: 'This article brings together two distinct, but related perspectives on playful museum experiences: Critical play and hybrid design. The article explores the challenges involved in combining these two perspectives, through the design of two hybrid museum experiences that aimed to facilitate critical play with/in the collections of the Museum of Yugoslavia and the highly contested heritage they represent. Based on reflections from the design process as well as feedback from test users we describe a series of challenges: Challenging the norms of visitor behaviour, challenging the role of the artefact, and challenging the curatorial authority. 
In conclusion we outline some possible design strategies to address these challenges.'\nauthor:\n- Anders Sundnes L\u00f8vlie\n- Karin Ryding\n- Jocelyn Spence\n- Paulina Rajkowska\n- Annika Waern\n- Tim Wray\n- Steve Benford\n- William Preston\n- 'Emily Clare-Thorn'\nbibliography:\n- 'references.bib'\ntitle: 'Playing games with Tito: Designing hybrid museum experiences for critical play'\n---\n\n<ccs2012> <concept> <concept\\_id>10003120.10003121.10011748</concept\\_id> <concept\\_desc>Human-centered computing\u00a0Empirical studies in HCI</concept\\_desc> <concept\\_significance>300</concept\\_significance> </concept> </ccs2012>\n\nIntroduction\n============\n\nThere is a growing interest in play and games in museums, causing the museum scholar Jenny Kidd to declare a \u201cludic turn within museums\u201d [@kidd_immersive_2018]. Games have been created for museums for" +"---\nabstract: 'Recently, a class of stationary black hole solutions with non-killing horizon in the asymptotic AdS bulk space (i.e. non-equilibrium black funnel) was constructed to describe the far from equilibrium heat transport and particle transport from the boundary black holes via AdS/CFT correspondence. It is generally believed that the temperature of a black hole with non-killing horizon can not be properly defined by the conventional methods used in the equilibrium black holes with killing horizon. In this study, we calculate the spectrum of Hawking radiation of the non-equilibrium black funnel using the Damour-Ruffini method. Our results indicate that the spectrum and the temperatures as well as the chemical potentials of the non-equilibrium black funnel do depend on one of the spatial coordinates. This is different from the equilibrium black holes with killing horizon, where the temperatures are uniform. Therefore, the black hole with non-killing horizon can be overall in non-equilibrium steady state while the Hawking temperature of the black funnel can be viewed as the local temperature and the corresponding Hawking radiation can be regarded as being in the local equilibrium with the horizon of the black funnel. By AdS/CFT, we discuss some possible implications of our results of" +"---\nabstract: 'Recently, the LHCb Collaboration reported a new structure $P_{cs}(4459)$ with a mass of 19\u00a0MeV below the $\\Xi_c \\bar{D}^{*} $ threshold. It may be a candidate of molecular state from the $\\Xi_c \\bar{D}^{*} $ interaction. In the current work, we perform a coupled-channel study of the $\\Xi_c^*\\bar{D}^*$, $\\Xi''_c\\bar{D}^*$, $\\Xi^*_c\\bar{D}$, $\\Xi_c\\bar{D}^*$, $\\Xi''_c\\bar{D}$, and $\\Xi_c\\bar{D}$ interactions in the quasipotential Bethe-Salpeter equation approach. With the help of the heavy quark chiral effective Lagrangian, the potential is constructed by light meson exchanges. Two $\\Xi_c \\bar{D}^{*} $ molecular states are produced with spin parities $ J^P=1/2^-$ and $3/2^- $. The lower state with $3/2^-$ can be related to the observed $P_{cs}(4450)$ while two-peak structure cannot be excluded. Within the same model, other strange hidden-charm pentaquarks are also predicted. Two states with spin parities $1/2^-$ and $3/2^-$ are predicted near the $\\Xi''_c\\bar{D}$, $\\Xi_c\\bar{D}$, and $\\Xi_c^*\\bar{D}$ thresholds, respectively. As two states near $\\Xi_c \\bar{D}^{*}$ threshold, two states are produced with $1/2^-$ and $3/2^-$ near the $\\Xi''_c\\bar{D}^*$ threshold. The couplings of the molecular states to the considered channels are also discussed. 
The experimental research of those states is helpful to understand the origin and internal structure of the $P_{cs}$ and $P_c$ states.'\nauthor:\n- 'Jun-Tao Zhu," +"---\nabstract: 'Motivated by the industrial processing of chocolate, we study experimentally the fluidisation of a sessile drop of yield-stress fluid on a pre-existing layer of the same fluid under vertical sinusoidal oscillations. We compare the behaviours of molten chocolate and Carbopol, which are both shear-thinning with a similar yield stress but exhibit very different elastic properties. We find that these materials spread when the forcing acceleration exceeds a threshold which is determined by the initial deposition process. However, they exhibit very different spreading behaviours: whereas chocolate exhibits slow long-term spreading, the Carbopol drop rapidly relaxes its stress by spreading to a new equilibrium shape with an enlarged footprint. This spreading is insensitive to the history of the forcing. In addition, the Carbopol drop performs large-amplitude oscillations with the forcing frequency, both above and below the threshold. We investigate these viscoelastic oscillations and provide evidence of complex nonlinear viscoelastic behaviour in the vicinity of the spreading threshold. In fact, for forcing accelerations greater than the spreading threshold, our drop automatically adjusts its shape to remain at the yield stress. We discuss how our vibrated-drop experiment offers a new and powerful approach to probing the yield transition in elastoviscoplastic fluids.'\nauthor:" +"---\nauthor:\n- Rasul Karimov\nbibliography:\n- 'main.bib'\ntitle: CNN with large memory layers\n---\n\nThis work is centred around the recently proposed product key memory structure [@large_memory], implemented for a number of computer vision applications. The memory structure can be regarded as a simple computation primitive suitable to be augmented to nearly all neural network architectures. The memory block allows implementing sparse access to memory with square root complexity scaling with respect to the memory capacity. The latter scaling is possible due to the incorporation of Cartesian product space decomposition of the key space for the nearest neighbour search. We have tested the memory layer on the classification, image reconstruction and relocalization problems and found that for some of those, the memory layers can provide significant speed/accuracy improvement with the high utilization of the key-value elements, while others require more careful fine-tuning and suffer from dying keys. To tackle the latter problem we have introduced a simple technique of memory re-initialization which helps us to eliminate unused key-value pairs from the memory and engage them in training again. We have conducted various experiments and got improvements in speed and accuracy for classification and PoseNet relocalization models.\n\nWe showed that" +"---\nabstract: 'We show that space- and time-correlated single-qubit rotation errors can lead to high-weight errors in a quantum circuit when the rotation angles are drawn from heavy-tailed distributions. This leads to a breakdown of quantum error correction, yielding reduced or in some cases no protection of the encoded logical qubits. While heavy-tailed phenomena are prevalent in the natural world, there is very little research as to whether noise with these statistics exists in current quantum processing devices.
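As a quick illustration of the tail behaviour at issue in the preceding excerpt: drawing over-rotation angles from a heavy-tailed (Cauchy) distribution produces gross coherent errors at a non-negligible rate, while a Gaussian of the same scale essentially never does. A minimal sketch; the scale, threshold and sample count are illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, scale = 10**6, 0.01                   # illustrative sample count and angle scale (rad)

gauss = rng.normal(0.0, scale, n)        # light-tailed over-rotation angles
cauchy = scale * rng.standard_cauchy(n)  # heavy-tailed angles with the same scale

# Probability of an over-rotation large enough to act as a gross coherent error.
threshold = np.pi / 4
for name, angles in [("Gaussian", gauss), ("Cauchy", cauchy)]:
    print(f"{name:8s} P(|angle| > pi/4) ~ {np.mean(np.abs(angles) > threshold):.1e}")
```

With these numbers the Gaussian case never produces such an angle, while the Cauchy tail does so at a rate close to one percent, which is the qualitative mechanism behind the high-weight errors described above.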
Furthermore, it is an open problem to develop tomographic or noise spectroscopy protocols that could test for the existence of noise with such statistics. These results suggest the need for quantum characterization methods that can reliably detect or reject the presence of such errors, together with continued first-principles studies of the origins of space- and time-correlated noise in quantum processors. If such noise does exist, physical or control-based mitigation protocols must be developed, as such noise would severely hinder the performance of fault-tolerant quantum computers.'\nauthor:\n- 'B.D. Clader'\n- 'Colin J. Trout'\n- 'Jeff P. Barnes'\n- Kevin Schultz\n- Gregory Quiroz\n- Paraj Titum\ntitle: 'Impact of correlations and heavy-tails on quantum error correction'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe" +"---\nabstract: 'Lorentzian distributions have been largely employed in statistical mechanics to obtain exact results for heterogeneous systems. Analytic continuation of these results is impossible even for slightly deformed Lorentzian distributions, due to the divergence of all the moments (cumulants). We have solved this problem by introducing a [*pseudo-cumulants\u2019*]{} expansion. This allows us to develop a reduction methodology for heterogeneous spiking neural networks subject to extrinsic and endogenous noise sources, thus generalizing the mean-field formulation introduced in \[E. Montbri\u00f3 [*et al.*]{}, Phys. Rev. X 5, 021028 (2015)\].'\nauthor:\n- 'Denis S.\u00a0Goldobin'\n- Matteo di Volo\n- Alessandro Torcini\ndate:\n- \n- \ntitle: A reduction methodology for fluctuation driven population dynamics\n---\n\n#### Introduction\n\nThe Lorentzian ([*or*]{} Cauchy) distribution (LD) is the second most important stable distribution for statistical physics (after the Gaussian one)\u00a0[@zolotarev1986], which can be expressed in a simple analytic form, i.e. $$L(y)=\\frac{\\pi^{-1}\\Delta}{\\Delta^2+(y-y_0)^2}\n\\label{LD}$$ where $y_0$ is the peak location and $\\Delta$ is the half-width at half-maximum (HWHM). In particular, for a heterogeneous system with random variables distributed according to an LD it is possible to estimate exactly the average observables via the residue theorem\u00a0[@yakubovich1969].\n\nThis approach has found large applications in physics, ranging from quantum" +"---\nabstract: 'The lack of high-quality proposals for the RoI box head has long impeded two-stage and multi-stage object detectors, and many previous works try to solve this by improving the RPN's performance or manually generating proposals from ground truth. However, these methods either incur huge training and inference costs or bring little improvement. In this paper, we design a novel training method named APDI, which means augmenting proposals by the detector itself and can generate proposals with higher quality. Furthermore, APDI makes it possible to integrate the IoU head into the RoI box head. And it does not add any hyper-parameter, which is beneficial for future research and downstream tasks. Extensive experiments on the COCO dataset show that our method brings at least 2.7 AP improvements on Faster R-CNN with various backbones, and APDI can cooperate with advanced RPNs, such as GA-RPN and Cascade RPN, to obtain extra gains. Furthermore, it brings significant improvements on Cascade R-CNN.
[^1]'\nauthor:\n- Xiaopei Wan$^1$\n- Zhenhua Guo$^2$\n- 'Chao He$^{1}$'\n- Yujiu Yang$^1$\n- |\n Fangbo Tao$^2$ $^1$Tsinghua University\\\n $^2$Alibaba Group\\\n {wxp18, hec18}@mails.tsinghua.edu.cn, yang.yujiu@sz.tsinghua.edu.cn, {mianzhang.gzh, fangbo.tfb}@alibaba-inc.com\nbibliography:\n- 'ijcai21.bib'\ntitle: Augmenting Proposals by the Detector Itself" +"---\nabstract: 'A novel concept of vision-based intelligent control of robotic arms is developed in this work. It enables the control of robotic arm motion using only visual inputs, that is, control by showing videos of the correct movements. The work can broadly be sub-divided into two segments. The first part develops an unsupervised vision-based method to control a robotic arm in the 2-D plane, and the second addresses the same task in the 3-D plane with a deep CNN. The first method is unsupervised, where our aim is to perform mimicking of human arm motion in real time by a manipulator. Mimicking here involves a series of steps, namely, tracking the motion of the arm in videos, estimating motion parameters, and replicating the motion parameters in the robot. We developed a network, namely the vision-to-motion optical network (DON), whose input is a video stream containing human hand movements and whose output is the velocity and torque information of the hand movements shown in the videos. The output of the DON is then fed to the robotic arm, enabling it to generate motion according to the real hand videos." +"---\nabstract: |\n A deraining network can be interpreted as a conditional generator that aims at removing rain streaks from an image. Most existing image deraining methods ignore model errors caused by uncertainty that reduces embedding quality. Unlike existing image deraining methods that embed low-quality features into the model directly, we replace low-quality features by latent high-quality features. The spirit of closed-loop feedback in the automatic control field is borrowed to obtain latent high-quality features. A new method for error detection and feature compensation is proposed to address model errors. Extensive experiments on benchmark datasets as well as specific real datasets demonstrate that the proposed method outperforms recent state-of-the-art methods. Code is available at:\\\n https://github.com/LI-Hao-SJTU/DerainRLNet\nauthor:\n- |\n Chenghao Chen$^{1}$ and Hao Li$^{*1,2}$\\\n \\\n 1. Department of Automation, Shanghai Jiao Tong University (SJTU), Shanghai, 200240, China.\\\n 2. \u00c9cole d\u2019Ing\u00e9nieurs SJTU-ParisTech (SPEIT), Shanghai, 200240, China.\\\n \\\n [^1] [^2]\nbibliography:\n- 'egbib.bib'\ntitle: Robust Representation Learning with Feedback for Single Image Deraining\n---\n\nIntroduction\n============\n\nOutdoor vision systems are widely used, for example on intelligent vehicles and for surveillance. They sometimes suffer from rain pollution, which is undesirable in practice. To handle this problem, research on image deraining has emerged, which aims at" +"---\nabstract: 'TAUKAM stands for \u201cTAUtenburg KAMera\u201d, which will become the new prime-focus imager for the Tautenburg Schmidt telescope. It employs an e2v 6k$\\times$6k CCD and is under manufacture by Spectral Instruments Inc. We describe the design of the instrument and the auxiliary components, its specifications as well as the concept for integrating the device into the telescope infrastructure.
First light is foreseen in 2017. TAUKAM will boost the observational capabilities of the telescope for optical wide-field surveys.'\nauthor:\n- Bringfried Stecklum\n- Jochen Eisl\u00f6ffel\n- Sylvio Klose\n- Uwe Laux\n- Tom L\u00f6winger\n- Helmut Meusinger\n- Michael Pluto\n- Johannes Winkler\n- Frank Dionies\nbibliography:\n- 'main.bib'\ntitle: 'TAUKAM: A 6k$\\,\\times\\,$6k prime-focus camera for the Tautenburg Schmidt Telescope'\n---\n\nINTRODUCTION {#sec:intro}\n============\n\nThe 2-m telescope of the Karl Schwarzschild Observatory, Tautenburg (IAU station code 033) \u2013 which became the Th\u00fcringer Landessternwarte (TLS) after the German re-unification \u2013 was built by Carl Zeiss Jena and went into operation in 1960 [@1961Obs....81...91V]. It is a versatile device which offers three optical configurations (Coude, Nasmyth, Schmidt). The Schmidt mode utilizes the 1.34-m correction plate. In this mode the telescope still represents the largest [*[imaging]{}*]{} Schmidt system with a vignette-free" +"---\nabstract: |\n This paper introduces the 2nd place solution for the Riiid! Answer Correctness Prediction[^1] in Kaggle[^2], the world\u2019s largest data science competition website. This competition was held from October 16, 2020, to January 7, 2021, with 3395 teams and 4387 competitors. The main insights and contributions of this paper are as follows.\\\n (i) We pointed out that existing Transformer-based models suffer from the problem that the information their query/key/value can contain is limited. To solve this problem, we proposed a method that uses LSTM to obtain the query/key/value and verified its effectiveness.\\\n (ii) We pointed out the \u2018inter-container\u2019 leakage problem, which happens in datasets where questions are sometimes served together. To solve this problem, we showed special indexing/masking techniques that are useful when using RNN-variants and Transformer.\\\n (iii) We found that additional hand-crafted features are effective to overcome the limits of Transformer, which can never consider the samples older than the sequence length.\nauthor:\n- |\n \\\n **Takashi Oya, Shigeo Morishima**\\\n Waseda Research Institute for Science and Engineering\\\n 3-4-1 Okubo, Shinjuku, Tokyo 169-8555, Japan.\\\n oya\_takashi@ruri.waseda.jp, shigeo@waseda.jp\nbibliography:\n- 'egbib.bib'\ntitle: 'LSTM-SAKT: LSTM-Encoded SAKT-like Transformer for Knowledge Tracing'\n---\n\nIntroduction\n============\n\nThe COVID-19 pandemic from 2020 changed the world overwhelmingly. Specifically," +"---\nabstract: 'Different types of malicious activities have been flagged in multiple permissionless blockchains such as Bitcoin, Ethereum, etc. While some malicious activities exploit vulnerabilities in the infrastructure of the blockchain, some target its users through social engineering techniques. To address these problems, we aim at automatically flagging blockchain accounts that originate such malicious exploitation of accounts of other participants. To that end, we identify a robust supervised machine learning (ML) algorithm that is resistant to any bias induced by an over-representation of certain malicious activity in the available dataset and is robust against adversarial attacks. We find that most of the malicious activities reported thus far, for example in the Ethereum blockchain ecosystem, behave statistically similarly.
Further, the previously used ML algorithms for identifying malicious accounts show bias towards a particular malicious activity which is over-represented. In the sequel, we identify that Neural Networks (NN) hold up best in the face of such a bias-inducing dataset while at the same time being robust against certain adversarial attacks.'\nauthor:\n- |\n [Rachit Agarwal]{}\\\n IIT-Kanpur\n- |\n [Tanmay Thapliyal]{}\\\n IIT-Kanpur\n- |\n [Sandeep K Shukla]{}\\\n IIT-Kanpur\nbibliography:\n- 'biblio.bib'\ntitle: '**[Detecting Malicious Accounts showing Adversarial Behavior in Permissionless Blockchains]{}**'" +"---\nabstract: 'We introduce SynSE, a novel syntactically guided generative approach for Zero-Shot Learning (ZSL). Our end-to-end approach learns progressively refined generative embedding spaces constrained within and across the involved modalities (visual, language). The inter-modal constraints are defined between action sequence embedding and embeddings of Parts of Speech (PoS) tagged words in the corresponding action description. We deploy SynSE for the task of skeleton-based action sequence recognition. Our design choices enable SynSE to generalize compositionally, i.e., recognize sequences whose action descriptions contain words not encountered during training. We also extend our approach to the more challenging Generalized Zero-Shot Learning (GZSL) problem via a confidence-based gating mechanism. We are the first to present zero-shot skeleton action recognition results on the large-scale NTU-60 and NTU-120 skeleton action datasets with multiple splits. Our results demonstrate SynSE\u2019s state-of-the-art performance in both ZSL and GZSL settings compared to strong baselines on the NTU-60 and NTU-120 datasets.'\naddress: |\n Center for Visual Information Technology\\\n IIIT Hyderabad, Hyderabad 500032, INDIA.\\\n `ravi.kiran@iiit.ac.in`\\\n \nbibliography:\n- 'synse.bib'\ntitle: 'SYNTACTICALLY GUIDED GENERATIVE EMBEDDINGS FOR ZERO-SHOT SKELETON ACTION RECOGNITION'\n---\n\nZSL, skeleton action recognition, VAE, deep learning, language and vision\n\nIntroduction {#sec:intro}\n============\n\nAdvances in human action recognition" +"---\nabstract: 'Recommendation systems often use online collaborative filtering (CF) algorithms to identify items a given user likes over time, based on ratings that this user and a large number of other users have provided in the past. This problem has been studied extensively when users\u2019 preferences do not change over time (static case), an assumption that is often violated in practical settings. In this paper, we introduce a novel model for online non-stationary recommendation systems which allows for temporal uncertainties in the users\u2019 preferences. For this model, we propose a user-based CF algorithm, and provide a theoretical analysis of its achievable reward. Compared to related non-stationary multi-armed bandit literature, the main fundamental difficulty in our model lies in the fact that variations in the preferences of a certain user may affect the recommendations for other users severely. We also test our algorithm over real-world datasets, showing its effectiveness in real-world applications. One of the main surprising observations in our experiments is the fact that our algorithm outperforms static algorithms even when preferences do not change over time.
This hints toward the general conclusion that in practice, dynamic algorithms, such as the one we propose, might be beneficial even in" +"---\nabstract: 'A Lyapunov-based method is presented for stabilizing and controlling of closed quantum systems. The proposed method is constructed upon a novel quantum Lyapunov function of the system state trajectory tracking error. A positive-definite operator in the Lyapunov function provides additional degrees of freedom for the designer. The stabilization process is analyzed regarding two distinct cases for this operator in terms of its vanishing or non-vanishing commutation with the Hamiltonian operator of the undriven quantum system. To cope with the global phase invariance of quantum states as a result of the quantum projective measurement postulate, equivalence classes of quantum states are defined and used in the proposed Lyapunov-based analysis and design. Results show significant improvement in both the set of stabilizable quantum systems and their invariant sets of state trajectories generated by designed control signals. The proposed method can potentially be applied for high-fidelity quantum control purposes in quantum computing frameworks.'\nauthor:\n- 'Elham\u00a0Jamalinia, Peyman\u00a0Azodi, Alireza\u00a0Khayatian, and\u00a0Peyman\u00a0Setoodeh [^1] [^2] [^3]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'Main.bib'\ntitle: 'Lyapunov-Based Stabilization and Control of Closed Quantum Systems'\n---\n\nQuantum control, Lyapunov theory, Stabilization.\n\nIntroduction {#sec:introduction}\n============\n\nStudying dynamic systems at nano-scale using quantum physics has been one" +"---\nabstract: 'We present Vx2Text, a framework for text generation from multimodal inputs consisting of video plus text, speech, or audio. In order to leverage transformer networks, which have been shown to be effective at modeling language, each modality is first converted into a set of language embeddings by a learnable tokenizer. This allows our approach to perform multimodal fusion in the language space, thus eliminating the need for ad-hoc cross-modal fusion modules. To address the non-differentiability of tokenization on continuous inputs (e.g., video or audio), we utilize a relaxation scheme that enables end-to-end training. Furthermore, unlike prior encoder-only models, our network includes an autoregressive decoder to generate open-ended text from the multimodal embeddings fused by the language encoder. This renders our approach fully generative and makes it directly applicable to different \u201cvideo+$x$ to text\u201d problems without the need to design specialized network heads for each task. The proposed framework is not only conceptually simple but also remarkably effective: experiments demonstrate that our approach based on a single architecture outperforms the state-of-the-art on three video-based text-generation tasks\u2014captioning, question answering and audio-visual scene-aware dialog.'\nauthor:\n- 'Xudong Lin$^{1}$'\n- Gedas Bertasius$^2$\n- Jue Wang$^2$\n- 'Shih-Fu Chang$^{1}$'\n- 'Devi Parikh$^{2,3}$'" +"---\nabstract: 'In heavy atoms and ions, nuclear structure effects are significantly enhanced due to the overlap of the electron wave functions with the nucleus. This overlap rapidly increases with the nuclear charge $Z$. We study the energy level shifts induced by the electric dipole and electric quadrupole nuclear polarization effects in atoms and ions with $Z \\geq 20$. 
The electric dipole polarization effect is enhanced by the nuclear giant dipole resonance. The electric quadrupole polarization effect is enhanced because the electrons in a heavy atom or ion move faster than the rotation of the deformed nucleus, thus experiencing significant corrections to the conventional approximation in which they \u2018see\u2019 an averaged nuclear charge density. The electric nuclear polarization effects are computed numerically for $1s$, $2s$, $2p_{1/2}$ and high $ns$ electrons. The results are fitted with elementary functions of nuclear parameters (nuclear charge, mass number, nuclear radius and deformation). We construct an effective potential which models the energy level shifts due to nuclear polarization. This effective potential, when added to the nuclear Coulomb interaction, may be used to find energy level shifts in multi-electron ions, atoms and molecules. The fitting functions and effective potentials of the nuclear polarization effects are important" +"---\nabstract: |\n To make good decisions in the real world, people need efficient planning strategies because their computational resources are limited. Knowing which planning strategies would work best for people in different situations would be very useful for understanding and improving human decision-making. But our ability to compute those strategies used to be limited to very small and very simple planning tasks. To overcome this computational bottleneck, we introduce a cognitively-inspired reinforcement learning method that exploits the hierarchical structure of human behavior. The basic idea is to decompose sequential decision problems into two sub-problems: setting a goal and planning how to achieve it. This hierarchical decomposition enables us to discover optimal strategies for human planning in larger and more complex tasks than was previously possible. The discovered strategies outperform existing planning algorithms and achieve a super-human level of computational efficiency. We demonstrate that teaching people to use those strategies significantly improves their performance in sequential decision-making tasks that require planning up to eight steps ahead. By contrast, none of the previous approaches was able to improve human performance on these problems. These findings suggest that our cognitively-informed approach makes it possible to leverage reinforcement" +"---\nabstract: 'The data for many classification problems, such as pattern and speech recognition, follow mixture distributions. To quantify the optimum performance for classification tasks, the Shannon mutual information is a natural information-theoretic metric, as it is directly related to the probability of error. The mutual information between mixture data and the class label does not have an analytical expression, nor any efficient computational algorithms. We introduce a variational upper bound, a lower bound, and three estimators, all employing pair-wise divergences between mixture components. We compare the new bounds and estimators with Monte Carlo stochastic sampling and bounds derived from entropy bounds.
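For orientation, the Monte Carlo baseline mentioned just above can be written in a few lines for a toy one-dimensional, two-class Gaussian mixture, using the identity I(x;C) = E[log p(x|C) - log p(x)]. All parameters here are illustrative and unrelated to the paper's experiments.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mus, p_c = np.array([-1.0, 1.0]), np.array([0.5, 0.5])  # toy two-class mixture

N = 200_000
c = rng.choice(2, size=N, p=p_c)          # class labels
x = rng.normal(mus[c], 1.0)               # samples from the mixture

log_cond = norm.logpdf(x, mus[c], 1.0)    # log p(x | C)
log_marg = np.log(sum(w * norm.pdf(x, m, 1.0) for w, m in zip(p_c, mus)))
print(f"I(x;C) ~ {np.mean(log_cond - log_marg):.4f} nats "
      f"(upper bound H(C) = ln 2 = {np.log(2):.4f})")
```

The estimate is unbiased but its cost grows with the sample size needed for a given precision, which is the motivation for the closed-form bounds and estimators studied in the excerpt.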
To conclude, we evaluate the performance of the bounds and estimators through numerical simulations.'\nauthor:\n- \n- \nbibliography:\n- 'math.bib'\ntitle: Bounds on mutual information of mixture data for classification tasks\n---\n\nMixture distribution, classification, Shannon mutual information, bounds, estimation, mixed-pair\n\nIntroduction\n============\n\nMotivation\n----------\n\nWe study the performance of classification tasks, where the goal is to infer the class label $C$ from sample data $\\bm x$. The Shannon mutual information ${\\mathrm{I}}(\\bm x; C)$ characterizes the reduction in the uncertainty of the class label $C$ with the knowledge of data $\\bm x$ and provides a way to quantify" +"---\nabstract: 'The discrete logarithm problem in a finite group is the basis for many protocols in cryptography. The best general algorithms which solve this problem have time complexity of $\\mathcal{O}(\\sqrt{N}\\log N)$, and a space complexity of $\\mathcal{O}(\\sqrt{N})$, where $N$ is the order of the group. (If $N$ is unknown, a simple modification would achieve a time complexity of $\\mathcal{O}(\\sqrt{N}(\\log N)^2)$.) These algorithms require the inversion of some group elements or rely on finding collisions and the existence of inverses, and thus do not adapt to work in the general semigroup setting. For semigroups, probabilistic algorithms with similar time complexity have been proposed. The main result of this paper is a deterministic algorithm for solving the discrete logarithm problem in a semigroup. Specifically, let $x$ be an element in a semigroup having finite order $N_x$. The paper provides an algorithm, which, given any element $y\\in \\langle x \\rangle $, returns all natural numbers $m$ with $x^m=y$, and has time complexity $O(\\sqrt{N_x}(\\log N_x)^2)$ steps. The paper also gives an analysis of the success rates of the existing probabilistic algorithms, which were so far only conjectured or stated loosely.'\nauthor:\n- Simran Tinani\n- Joachim Rosenthal\nbibliography:\n- 'huge1.bib'\ntitle: |\n A" +"---\nabstract: 'In this work we evaluate the physical acceptability of relativistic anisotropic spheres modeled by two polytropic equations of state -with the same Newtonian limit- commonly used to describe compact objects in General Relativity. We integrate numerically the corresponding Lane-Emden equation in order to get density, mass and pressure profiles. An ansatz is used for the anisotropic pressure, allowing us to have material configurations slightly deviating from the isotropic condition. Numerical models are classified in a parameter space according to the number of physical acceptability conditions that they fulfil. We found that the polytropes considering total energy density are more stable than the second type of polytropic EoS.'\naddress:\n- '$^1$ Escuela de F\u00edsica, Universidad Industrial de Santander, Bucaramanga, Colombia'\n- '$^2$ Departamento de F\u00edsica, Universidad de los Andes, M\u00e9rida, Venezuela'\nauthor:\n- 'Daniel Su\u00e1rez-Urango$^{1}$, Luis A. N\u00fa\u00f1ez$^{1,2}$ and H\u00e9ctor Hern\u00e1ndez$^{1,2}$'\ntitle: 'Relativistic Anisotropic Polytropic Spheres: Physical Acceptability'\n---\n\nIntroduction\n============\n\nThe analysis of the presence and propagation of instabilities in compact objects has been the subject of research for decades. Only those stable configurations can represent real entities of astrophysical interest.
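The Lane-Emden integration mentioned in the preceding excerpt concerns relativistic, anisotropic generalizations; purely as a point of reference, the classical Newtonian Lane-Emden equation $\theta'' + (2/\xi)\theta' + \theta^n = 0$, $\theta(0)=1$, $\theta'(0)=0$, integrates in a few lines. The polytropic index and integration span below are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden(xi, y, n):
    theta, dtheta = y
    # Clip theta at zero so theta**n stays real past the stellar surface.
    return [dtheta, -max(theta, 0.0) ** n - (2.0 / xi) * dtheta]

n = 1.5  # illustrative polytropic index
sol = solve_ivp(lane_emden, (1e-6, 10.0), [1.0, 0.0], args=(n,),
                rtol=1e-8, atol=1e-10,
                events=lambda xi, y, n: y[0])   # theta = 0 marks the surface
xi1 = sol.t_events[0][0]
print(f"first zero xi_1 = {xi1:.4f} (known value ~3.6538 for n = 1.5)")
```

The first zero of $\theta$ gives the dimensionless stellar radius; density, mass and pressure profiles follow from $\theta(\xi)$ in the standard way.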
Typically, stars are modelled as spherical objects \u2013with gravity as the only binding force\u2013 using structure equations that" +"---\nabstract: |\n **Abstract:** In this paper, we investigate the initial value problem for a Caputo space-time fractional Schr[\" o]{}dinger equation for the delta potential. To solve this equation, we use the joint Laplace transform on the spatial coordinate and the Fourier transform on the time coordinate. Finally, the wave function and the time dependent energy eigenvalues are obtained for a particle which is subjected to the delta potential.\n\n **keywords:** The fractional Schr[\u00f6]{}dinger equation; Caputo space-time fractional derivative\n\n **PACS:** 03.65.Ca, 02.50.Ey, 02.30.Gp, 03.65.Db\nauthor:\n- 'Sepideh Saberhaghparvar[^1] and Hossein Panahi[^2]'\ntitle: 'Initial Value Problem for a Caputo Space-time Fractional Schr[\u00f6]{}dinger Equation for the Delta Potential'\n---\n\n\[section1\]Introduction\n========================\n\nThe fractional calculus is a generalization of the usual calculus, so that derivatives and integrals are defined for arbitrary real numbers. For some phenomena, fractional operators provide better models than ordinary derivatives and integrals. The fractional calculus has been used in science and engineering [@1; @2; @3]. Recently, the fractional Schr[\u00f6]{}dinger equation has been studied in many fields, such as the obstacle problem, phase transitions and anomalous diffusion [@4; @5; @6; @7; @8; @9; @10], etc. The fractional calculus began with Leibniz (1695-1697) and Euler\u2019s speculations (1730). After" +"---\nabstract: 'Neural network (NN) interatomic potentials provide fast prediction of potential energy surfaces, closely matching the accuracy of the electronic structure methods used to produce the training data. However, NN predictions are only reliable within well-learned training domains, and show volatile behavior when extrapolating. Uncertainty quantification approaches can flag atomic configurations for which prediction confidence is low, but arriving at such uncertain regions requires expensive sampling of the NN phase space, often using atomistic simulations. Here, we exploit automatic differentiation to drive atomistic systems towards high-likelihood, high-uncertainty configurations without the need for molecular dynamics simulations. By performing adversarial attacks on an uncertainty metric, informative geometries that expand the training domain of NNs are sampled. When combined with an active learning loop, this approach bootstraps and improves NN potentials while decreasing the number of calls to the ground truth method. This efficiency is demonstrated on sampling of kinetic barriers and collective variables in molecules, and can be extended to any NN potential architecture and materials system.'\nauthor:\n- 'Daniel Schwalbe-Koda'\n- Aik Rui Tan\n- 'Rafael G\u00f3mez-Bombarelli'\ntitle: 'Differentiable sampling of molecular geometries with uncertainty-based adversarial attacks'\n---\n\n[^1]\n\n[^2]\n\nIntroduction\n============\n\nRecent advances in machine learning (ML) techniques have" +"---\nabstract: 'Text classification is the most basic natural language processing task. It has a wide range of applications, from sentiment analysis to topic classification. Recently, deep learning approaches based on CNN, LSTM, and Transformers have been the de facto approach for text classification. In this work, we highlight a common issue associated with these approaches.
We show that these systems are over-reliant on the important words present in the text that are useful for classification. With limited training data and discriminative training strategy, these approaches tend to ignore the semantic meaning of the sentence and rather just focus on keywords or important n-grams. We propose a simple black box technique ShutText to present the shortcomings of the model and identify the over-reliance of the model on keywords. This involves randomly shuffling the words in a sentence and evaluating the classification accuracy. We see that on common text classification datasets there is very little effect of shuffling and with high probability these models predict the original class. We also evaluate the effect of language model pretraining on these models and try to answer questions around model robustness to out of domain sentences. We show that simple models based on" +"---\nabstract: 'The astropause (heliopause for the Sun) is the tangential discontinuity separating the stellar wind from the interstellar plasma. The global shape of the heliopause is a matter of debates. Two types of the shape are under discussion: comet-like and tube-like. In the second type the two-jets oriented toward the stellar rotation axis are formed by the action of azimuthal component of the stellar magnetic field. We explore a simplified global astrosphere in which: (1) the surrounding and moving with respect to the star circumstellar medium is fully ionized, (2) the interstellar magnetic field is neglected, (3) the radial component of the stellar magnetic field is neglected as compared with the azimuthal component, (4) the stellar wind outflow is spherically symmetric and supersonic. We present the results of numerical 3D MHD modelling and explore how the global structure depends on the gas-dynamic Mach number of the interstellar flow, $M_\\infty$, and the Alfvenic Mach number in the stellar wind. It is shown that the astropause has a tube-like shape for small values of $M_\\infty$. The wings of the tube are distorted toward the tail as larger as larger the Mach number is. The new (to our knowledge) result is the" +"---\nabstract: 'Quantum algorithms for solving the Quantum Linear System (QLS) problem are among the most investigated quantum algorithms of recent times, with potential applications including the solution of computationally intractable differential equations and speed-ups in machine learning. A fundamental parameter governing the efficiency of QLS solvers is $\\kappa$, the condition number of the coefficient matrix $A$, as it has been known since the inception of the QLS problem that for worst-case instances the runtime scales at least linearly in $\\kappa$\u00a0[@HHL]. However, for the case of positive-definite matrices classical algorithms can solve linear systems with a runtime scaling as $\\sqrt{\\kappa}$, a quadratic improvement compared to the the indefinite case. It is then natural to ask whether QLS solvers may hold an analogous improvement. In this work we answer the question in the negative, showing that solving a QLS entails a runtime linear in $\\kappa$ also when $A$ is positive definite. 
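The $\sqrt{\kappa}$ classical scaling for positive-definite systems quoted in the excerpt above is realized by the conjugate gradient method. A self-contained numerical check on diagonal SPD test matrices follows; the matrix size, spectrum and tolerance are illustrative choices.

```python
import numpy as np

def cg_iterations(diag, b, tol=1e-8):
    """Iterations for conjugate gradients to solve diag(d) x = b."""
    x, r = np.zeros_like(b), b.copy()
    p, rs = r.copy(), b @ b
    bnorm = np.linalg.norm(b)
    for k in range(1, 100_000):
        Ap = diag * p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * bnorm:
            return k
        p = r + (rs_new / rs) * p
        rs = rs_new

rng = np.random.default_rng(2)
b = rng.normal(size=2000)
for kappa in [10, 100, 1000, 10000]:
    # SPD test matrix: eigenvalues spread uniformly over [1, kappa].
    iters = cg_iterations(np.linspace(1.0, kappa, 2000), b)
    print(f"kappa = {kappa:>6}: {iters} iterations")
```

The iteration count grows roughly like $\sqrt{\kappa}$, illustrating the classical behaviour against which the quantum lower bound discussed above is contrasted.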
We then identify broad classes of positive-definite QLS where this lower bound can be circumvented and present two new quantum algorithms featuring a quadratic speed-up in $\\kappa$: the first is based on efficiently implementing a matrix-block-encoding of $A^{-1}$, the second constructs a decomposition of the form $A = L" +"---\nabstract: 'Symplectic integrators that preserve the geometric structure of Hamiltonian flows and do not exhibit secular growth in energy errors are suitable for the long-term integration of N-body Hamiltonian systems in the solar system. However, the construction of explicit symplectic integrators is frequently difficult in general relativity because all variables are inseparable. Moreover, even if two analytically integrable splitting parts exist in a relativistic Hamiltonian, all analytical solutions are not explicit functions of proper time. Naturally, implicit symplectic integrators, such as the midpoint rule, are applicable to this case. In general, these integrators are numerically more expensive to solve than same-order explicit symplectic algorithms. To address this issue, we split the Hamiltonian of Schwarzschild space-time geometry into four integrable parts with analytical solutions as explicit functions of proper time. In this manner, second- and fourth-order explicit symplectic integrators can be easily available. The new algorithms are also useful for modeling the chaotic motion of charged particles around a black hole with an external magnetic field. They demonstrate excellent long-term performance in maintaining bounded Hamiltonian errors and saving computational cost when appropriate proper time steps are adopted.'\nauthor:\n- 'Ying Wang$^{1,2}$, Wei Sun$^{1}$, Fuyao Liu$^{1}$, Xin Wu$^{1,2,3,\\dag}$'\ntitle: 'Construction of" +"---\nabstract: '[ We derive a master equation for the reduced density matrix of a uniformly accelerating quantum detector in arbitrary dimensions, generically coupled to a field initially in its vacuum state, and analyze its late time regime. We find that such density matrix asymptotically reaches a Gibbs state. The particularities of its evolution towards this state are encoded in the response function, which depends on the dimension, the properties of the fields, and the specific coupling to them. We also compare this situation with the thermalization of a static detector immersed in a thermal field state, pinpointing the differences between both scenarios. In particular, we analyze the role of the response function and its effect on the evolution of the detector towards equilibrium. Furthermore, we explore the consequences of the well-known statistics inversion of the response function of an Unruh-DeWitt detector linearly coupled to a free scalar field in odd spacetime dimensions. This allows us to specify in which sense accelerated detectors in Minkowski vacuum behave as static detectors in a thermal bath and in which sense they do not. ]{}'\nauthor:\n- Julio Arrechea\n- Carlos Barcel\u00f3\n- 'Luis J. Garay'\n- 'Gerardo Garc\u00eda-Moreno'\nbibliography:\n- 'bunruhf\\_biblio.bib'\ntitle:" +"---\nabstract: 'The Bhatnagar-Gross-Krook (BGK) single-relaxation-time collision model for the Boltzmann equation serves as the foundation of the lattice BGK (LBGK) method developed in recent years. The description of the collision as a uniform relaxation process of the distribution function towards its equilibrium is, in many scenarios, simplistic. 
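For reference, the uniform relaxation criticized in the sentence above is the single-rate BGK update $f \leftarrow f - (f - f^{\mathrm{eq}})/\tau$, under which every non-conserved mode decays at the same rate. A schematic one-dimensional, three-velocity sketch with illustrative values (not the multi-rate model introduced next):

```python
import numpy as np

# Schematic single-relaxation-time BGK collision (1-D, three velocities).
xi = np.array([-1.0, 0.0, 1.0])   # discrete velocities
w = np.array([1/6, 2/3, 1/6])     # lattice weights, sound speed^2 = 1/3
tau = 0.8                         # illustrative relaxation time

def f_eq(rho, u, cs2=1.0/3.0):
    # Second-order (in u) discrete Maxwellian.
    return rho * w * (1 + xi*u/cs2 + (xi*u)**2/(2*cs2**2) - u**2/(2*cs2))

# Perturbation with zero net mass and momentum, so rho and u are conserved.
f = f_eq(1.0, 0.1) + 0.01 * np.array([1.0, -2.0, 1.0])
for step in range(5):
    rho, u = f.sum(), (xi * f).sum() / f.sum()
    feq = f_eq(rho, u)
    f += -(f - feq) / tau   # every non-conserved mode decays at the same rate
    print(f"step {step}: max deviation from equilibrium = {np.abs(f - feq).max():.2e}")
```

In the multiple-relaxation-time construction described next, the single rate $1/\tau$ is replaced by separate rates for the irreducible Hermite components, so that, for example, shear and bulk viscosity can be tuned independently.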
Based on a previous series of papers, we present a collision model formulated as independent relaxations of the irreducible components of the Hermite coefficients in the reference frame moving with the fluid. These components, corresponding to the irreducible representation of the rotation group, are the minimum tensor components that can be separately relaxed without violating rotation symmetry. For the 2nd, 3rd and 4th moments respectively, two, two and three independent relaxation rates can exist, giving rise to the shear and bulk viscosity, thermal diffusivity and some high-order relaxation processes not explicitly manifested in the Navier-Stokes-Fourier equations. Using the binomial transform, the Hermite coefficients are evaluated in the absolute frame to avoid the numerical dissipation introduced by interpolation. Extensive numerical verification is also provided.'\nauthor:\n- Xiaowen Shan\n- Yangyang Shi\n- Xuhui Li\ntitle: 'A multiple-relaxation-time collision model by Hermite expansion'\n---\n\nIntroduction\n============\n\nA well-known artifact of the Bhatnagar-Gross-Krook (BGK) collision" +"---\nauthor:\n- Leonardo Pierobon\n- 'Robin E. Sch\u00e4ublin'\n- Andr\u00e1s Kov\u00e1cs\n- 'Stephan S. A. Gerstl'\n- Alexander Firlus\n- 'Urs V. Wyss'\n- 'Rafal E. Dunin-Borkowski'\n- Michalis Charilaou\n- 'J\u00f6rg F. L\u00f6ffler'\nbibliography:\n- 'Paper2\_bibliography.bib'\ntitle: 'Temperature dependence of magnetization processes in Sm(Co,Fe,Cu,Zr)$_z$ magnets with different nanoscale microstructures'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nSm\u2013(Co, Fe, Cu, Zr)$_z$ ($z = 6.7 - 9.1$) alloys are one of the best commercially available permanent magnets for high-temperature applications, owing to their high Curie temperature and large magnetic coercivity [@ray1992; @gutfleisch2011; @mccallum2014].
Their microstructure, which has been nanoengineered over decades of extensive research [@hadjipanayis1984; @liu1999; @Horiuchi2013; @duerrschnabel2017], forms as a result of a carefully designed aging and heat-treatment process. It comprises Sm$_2$Co$_{17}$ cells, SmCo$_5$ cell walls, and Zr-rich platelets, called the Z phase. The platelets are oriented perpendicular to the $c$-axis of the cells, which is also the easy axis of the magnetocrystalline anisotropy. The intertwined structure of the cells with high saturation magnetization and the cell walls with high magnetic anisotropy significantly improves the magnetic properties and makes them highly tunable [@kronmueller1996; @Skomski2013]. Especially important is the enhancement of coercivity due to domain-wall (DW) pinning, which arises from the difference" +"---\nauthor:\n- 'Nicolas F. Bouch\u00e9'\n- Shy Genel\n- Alisson Pellissier\n- C\u00e9dric Dubois\n- Thierry Contini\n- Beno\u00eet Epinat\n- Annalisa Pillepich\n- Davor Krajnovi\u0107\n- Dylan Nelson\n- 'Valentina Abril-Melgarejo'\n- Johan Richard\n- Leindert Boogaard\n- Michael Maseda\n- Wilfried Mercier\n- Roland Bacon\n- Matthias Steinmetz\n- Mark Vogelsberger\ndate: 'Received\u2014; accepted \u2014'\ntitle: 'The MUSE Hubble Ultra Deep Field Survey XVI. The angular momentum of low-mass star-forming galaxies. A cautionary tale and insights from TNG50[^1]'\n---\n\nIntroduction\n============\n\nIn a $\\Lambda$ cold dark matter ($\\Lambda$CDM) universe, baryons cool, fall inwards, and form centrifugally supported disks in the centers of halos. The specific angular momentum (sAM) sets the pressure, instabilities, gas fractions [@ObreschkowD_16a; @RomeoA_18a; @RomeoA_20b; @LiJ_20a], and most importantly determines the disk properties, such as size [e.g., @WhiteS_78a; @FallM_80a; @FallM_83a; @MoH_98a; @DalcantonJ_97a; @vandenBoschF_03b; @DuttonA_09a; @SomervilleR_17a]. As disks evolve from $z=2$ to the present, they must grow from a vast reservoir of corotating cold accreting material in the circum-galactic medium (CGM) as argued in @RenziniA_20a. This is also strongly supported by hydro-dynamical simulations which predict roughly coplanar gaseous structures [@StewartK_13a; @StewartK_17a; @DanovichM_15a; @HoS_19a; @KretschmerM_20a; @DeFelippisD_21a] embedded in a rotating CGM [@DeFelippisD_20a]. [ These coplanar structures were" +"---\nabstract: 'A pair of triply charmed baryons, $\\Omega_{ccc}\\Omega_{ccc}$, is studied as an ideal dibaryon system by (2+1)-flavor lattice QCD with nearly physical light-quark masses and the relativistic heavy quark action with the physical charm quark mass. The spatial baryon-baryon correlation is related to their scattering parameters on the basis of the HAL QCD method. The $\\Omega_{ccc}\\Omega_{ccc}$ in the ${^1S_0}$ channel taking into account the Coulomb repulsion with the charge form factor of $\\Omega_{ccc}$ leads to the scattering length $a^{\\rm C}_0\\simeq -19~\\text{fm}$ and the effective range $r^{\\rm C}_{\\mathrm{eff}}\\simeq 0.45~\\text{fm}$. 
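A quick arithmetic check of the dimensionless ratio quoted next; the dineutron comparison uses commonly quoted nn ${}^1S_0$ literature values, which are assumptions here, not numbers from this paper.

```python
# Check of the dimensionless ratios quoted in the text (lengths in fm).
a0, reff = -19.0, 0.45      # Omega_ccc Omega_ccc (^1S_0 with Coulomb), from the excerpt
a_nn, r_nn = -18.5, 2.75    # commonly quoted nn ^1S_0 values (assumed, for comparison)

print(f"Omega_ccc Omega_ccc: r_eff/a0 = {reff / a0:.3f}")    # ~ -0.024
print(f"dineutron:           r_eff/a0 = {r_nn / a_nn:.3f}")  # ~ -0.149
```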
The ratio $r^{\\rm C}_{\\mathrm{eff}}/a^{\\rm C}_0 \\simeq -0.024$, whose magnitude is considerably smaller than that of the dineutron ($-0.149$), indicates that $\\Omega_{ccc}\\Omega_{ccc}$ is located in the unitary regime.'\nauthor:\n- Yan Lyu\n- Hui Tong\n- Takuya Sugiura\n- Sinya Aoki\n- |\n \\\n Takumi Doi\n- Tetsuo Hatsuda\n- Jie Meng\n- Takaya Miyamoto\ntitle: Dibaryon with highest charm number near unitarity from lattice QCD\n---\n\n[*Introduction.$-$*]{} Quantum chromodynamics (QCD) is a fundamental theory of strong interaction and governs not only the interaction among quarks and gluons but also the interaction between color-neutral hadrons. In particular, the nucleon-nucleon ($NN$) interaction, which shows a characteristic mid-range attraction and a short-range repulsion, as" +"---\nabstract: '\u00a0Let $\\pi\\in \\Pi(\\mu,\\nu)$ be a coupling between two probability measures $\\mu$ and $\\nu$ on a Polish space. In this article we propose and study a class of nonparametric measures of association between $\\mu$ and $\\nu$, which we call Wasserstein correlation coefficients. These coefficients are based on the Wasserstein distance between $\\nu$ and the disintegration $\\pi_{x_1}$ of $\\pi$ with respect to the first coordinate. We also establish basic statistical properties of this new class of measures: we develop a statistical theory for strongly consistent estimators and determine their convergence rate in the case of compactly supported measures $\\mu$ and $\\nu$. Throughout our analysis we make use of the so-called adapted/bicausal Wasserstein distance, in particular we rely on results established in \\[Backhoff, Bartl, Beiglb\u00f6ck, Wiesel. Estimating processes in adapted Wasserstein distance. 2020\\]. Our approach applies to probability laws on general Polish spaces.'\naddress: 'Johannes WieselColumbia University, Department of Statistics1255 Amsterdam AvenueNew York, NY 10027, USA'\nauthor:\n- Johannes Wiesel\nbibliography:\n- 'bib.bib'\ntitle: Measuring association with Wasserstein distances\n---\n\n[^1]\n\nIntroduction\n============\n\nGiven a sample of $(X_1^1, X_2^1), (X_1^2, X_2^2), \\dots, (X_1^N, X_2^N) $ generated from a probability measure $\\pi$ with marginals $\\mu$ and $\\nu$ on a product $\\mathcal{X}\\times" +"---\nabstract: 'Image quality assessment (IQA) models aim to establish a quantitative relationship between visual images and their perceptual quality by human observers. IQA modeling plays a special bridging role between vision science and engineering practice, both as a test-bed for vision theories and computational biovision models, and as a powerful tool that could potentially make profound impact on a broad range of image processing, computer vision, and computer graphics applications, for design, optimization, and evaluation purposes. IQA research has enjoyed an accelerated growth in the past two decades. Here we present an overview of IQA methods from a Bayesian perspective, with the goals of unifying a wide spectrum of IQA approaches under a common framework and providing useful references to fundamental concepts accessible to vision scientists and image processing practitioners. 
We discuss the implications of the successes and limitations of modern IQA methods for biological vision and the prospect for vision science to inform the design of future artificial vision systems[^1].'\nauthor:\n- |\n Zhengfang\u00a0Duanmu\\\n University of Waterloo\\\n Waterloo, ON, N2L 3G1\\\n `zduanmu@uwaterloo.ca`\\\n Wentao\u00a0Liu\\\n University of Waterloo\\\n Waterloo, ON, N2L 3G1\\\n `w238liu@uwaterloo.ca`\\\n Zhongling\u00a0Wang\\\n University of Waterloo\\\n Waterloo, ON, N2L 3G1\\\n `zhongling.wang@uwaterloo.ca`\\\n Zhou\u00a0Wang\\\n University of Waterloo\\" +"---\nabstract: |\n We study the inclusive production of bottom-flavored hadrons from semileptonic decays of polarized top quarks at next-to-leading order in QCD using fragmentation functions recently determined from a global fit to $e^+e^-$ data. We provide the relevant differential decay widths at parton level in analytic form. These results fill an important gap in the theoretical interpretation of recent measurements of the top-quark polarization and the $t\bar{t}$ spin correlations using dilepton final states in proton-proton collisions at the CERN Large Hadron Collider. As an illustration, we study the distributions in the scaled bottom-hadron energy of the polarized-top-quark decay widths for different $W$-boson helicities.\n\n PACS numbers: 12.38.Bx, 13.85.Ni, 14.40.Nd, 14.65.Ha\nauthor:\n- |\n Bernd A. Kniehl[^1]\\\n [II. Institut f\u00fcr Theoretische Physik, Universit\u00e4t Hamburg,]{}\\\n [Luruper Chaussee 149, 22761 Hamburg, Germany]{}\\\n \\\n S. Mohammad Moosavi Nejad[^2]\\\n [Faculty of Physics, Yazd University, P.O. Box 89195\u2013741, Yazd, Iran]{}\ntitle: |\n -3cm\n\n DESY 20\u2013214 ISSN 0418-9833\n\n December 2020\n\n 1.5cm Angular analysis of bottom-flavored hadron production in semileptonic decays of polarized top quarks\n---\n\nIntroduction {#sec:one}\n============\n\nThe top quark $t$ of the standard model (SM) is the heaviest known elementary particle. Due to its high mass, it plays a crucial role in testing the electroweak symmetry" +"---\nabstract: 'Current semantic segmentation methods focus only on mining \u201clocal\u201d context, i.e., dependencies between pixels within individual images, by context-aggregation modules (e.g., dilated convolution, neural attention) or structure-aware optimization criteria (e.g., IoU-like loss). However, they ignore \u201cglobal\u201d context of the training data, i.e., rich semantic relations between pixels across different images. Inspired by recent advances in unsupervised contrastive representation learning, we propose a pixel-wise contrastive algorithm for semantic segmentation in the fully supervised setting. The core idea is to enforce pixel embeddings belonging to the same semantic class to be more similar than embeddings from different classes. It raises a [pixel-wise metric learning paradigm for semantic segmentation, by explicitly exploring the structures of labeled pixels]{}, which were rarely explored before. Our method can be effortlessly incorporated into existing segmentation frameworks without extra overhead during testing. We experimentally show that, with famous segmentation models (e.g., DeepLabV3, HRNet, OCR) and backbones (e.g., ResNet, HRNet), our method brings performance improvements across diverse datasets (e.g., Cityscapes, PASCAL-Context, COCO-Stuff, CamVid).
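The core idea stated above, pulling together embeddings of same-class pixels and pushing apart different-class ones, can be sketched as a supervised InfoNCE-style loss over sampled pixel embeddings. The following is a schematic PyTorch sketch with illustrative shapes and temperature, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(emb, labels, temperature=0.1):
    """Supervised contrastive loss over sampled pixel embeddings.

    emb:    (N, D) pixel embeddings sampled from a batch
    labels: (N,)   semantic class id of each pixel
    """
    emb = F.normalize(emb, dim=1)
    sim = emb @ emb.t() / temperature                       # (N, N) similarities
    eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    logits = sim.masked_fill(eye, float('-inf'))            # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    n_pos = pos.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos, 0.0)).sum(dim=1) / n_pos
    return loss[pos.any(dim=1)].mean()   # only anchors with at least one positive

# Toy usage: 128 sampled pixels, 16-D embeddings, 4 semantic classes.
emb = torch.randn(128, 16, requires_grad=True)
labels = torch.randint(0, 4, (128,))
pixel_contrastive_loss(emb, labels).backward()
```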
We expect that this work will encourage our community to rethink the current de facto training paradigm in semantic segmentation.[^1]'\nauthor:\n- |\n Wenguan Wang$^{1}\\thanks{The first two authors contribute equally to this work.}$\u00a0,\u00a0\u00a0Tianfei" +"---\nauthor:\n- Sudipan Saha and Nasrullah Sheikh\nbibliography:\n- 'ultrasoundClassification.bib'\ntitle: Ultrasound Image Classification using ACGAN with Small Training Dataset\n---\n\nIntroduction {#sec:intro}\n============\n\nUltrasound imaging is a medical imaging modality regularly used in clinical applications and many areas of biomedical research. The most popular ultrasound imaging mode is the B (brightness) mode, which is performed by sweeping the transmitted ultrasound wave over a plane to generate an intensity image.\n\nThe progress in image analysis in the last few years can be ascribed to the accessibility of large labeled data that triggered the development of deep learning algorithms, especially Convolutional Neural Networks (CNNs). To train a CNN, a considerable amount of labeled data is generally required. Collection of such a large dataset is not difficult for natural (RGB) images. Huge numbers of images are uploaded every day to social media. However, acquiring a large dataset is challenging in the context of medical images, especially for ultrasound images.\n\nIn the applications where only small datasets are available, the CNN based models are often adopted in a setting known as transfer learning [@torrey2010transfer; @saha2019semantic]. In this setting, deep learning based models are trained on a large dataset for some task where such a dataset is available." +"---\nauthor:\n- Eileen Zhang\ndate: |\n Department of Statistics, University of California, Irvine\\\n congz10@uci.edu \ntitle: 'A Study on the Association between Maternal Childhood Trauma Exposure and Placental-fetal Stress Physiology during Pregnancy'\n---\n\n**Abstract**\n\n**Background** It has been found that the effect of childhood trauma (CT) exposure may pass on to the next generation. Scientists have hypothesized that the association between CT exposure and placental-fetal stress physiology is the mechanism. A study was conducted to examine the hypothesis.\\\n**Method** To examine the association between CT exposure and placental corticotrophin-releasing hormone (pCRH), a linear mixed-effects model and a hierarchical Bayesian linear model were constructed. In Bayesian inference, by providing conditionally conjugate priors, a Gibbs sampler was used to draw MCMC samples. A piecewise linear mixed-effects model was fitted in order to accommodate the dramatic change of pCRH at around week 20 into pregnancy. Pearson residual, QQ, ACF and trace plots were used to assess model adequacy. The likelihood ratio test and DIC were utilized for model selection.\\\n**Results** The association between CT exposure and pCRH during pregnancy is evident. The effect of CT exposure on pCRH varies dramatically over gestational age. Women with one childhood trauma would experience 11.9% higher in" +"---\nabstract: |\n A Schreier decoration is a combinatorial coding of an action of the free group $F_d$ on the vertex set of a $2d$-regular graph.
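Elementary background for the orientations involved here (the excerpt continues below with factor-of-iid balanced orientations): any finite connected graph with all degrees even admits a balanced orientation, i.e., indegree equal to outdegree at every vertex, obtained by orienting each edge along an Eulerian circuit. A toy sketch with networkx, purely illustrative and unrelated to the factor-of-iid constructions studied in the paper.

```python
import networkx as nx

def balanced_orientation(G):
    """Orient a connected, even-degree graph along an Eulerian circuit,
    which makes indegree equal outdegree at every vertex."""
    D = nx.DiGraph()
    D.add_nodes_from(G)
    for u, v in nx.eulerian_circuit(G):
        D.add_edge(u, v)   # orient each edge in its traversal direction
    return D

G = nx.grid_2d_graph(4, 4, periodic=True)   # toy 4-regular graph (discrete torus)
D = balanced_orientation(G)
print(all(D.in_degree(v) == D.out_degree(v) for v in D))   # True
```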
We investigate whether a Schreier decoration exists on various countably infinite transitive graphs as a factor of iid.\n\n We show that $\\mathbb{Z}^d,d\\geq3$, the square lattice and also the three other Archimedean lattices of even degree have finitary-factor-of-iid Schreier decorations, and exhibit examples of transitive graphs of arbitrary even degree in which obtaining such a decoration as a factor of iid is impossible.\n\n We also prove that symmetrical planar lattices with all degrees even have a factor of iid balanced orientation, meaning the indegree of every vertex is equal to its outdegree, and demonstrate that the property of having a factor-of-iid balanced orientation is not invariant under quasi-isometry.\nauthor:\n- 'Ferenc Bencs[^1]'\n- Aranka Hru\u0161kov\u00e1\n- L\u00e1szl\u00f3 M\u00e1rton T\u00f3th\nbibliography:\n- 'hivatkozat.bib'\ntitle: 'Factor-of-iid Schreier decorations of lattices in Euclidean spaces '\n---\n\nIntroduction\n============\n\nLet $G$ be a simple connected $2d$-regular graph. A *Schreier decoration* of $G$ is a colouring of the edges with $d$ colours together with an orientation such that at every vertex, there is exactly one incoming and one outgoing edge" +"---\nabstract: |\n Solid-state or crystal acceleration has for long been regarded as an attractive frontier in advanced particle acceleration. However, experimental investigations of solid-state acceleration mechanisms which offer $\\rm TVm^{-1}$ acceleration gradients have been hampered by several technological constraints. The primary constraint has been the unavailability of attosecond particle or photon sources suitable for excitation of collective modes in bulk crystals. Secondly, there are significant difficulties with direct high-intensity irradiation of bulk solids, such as beam instabilities due to crystal imperfections and collisions etc.\n\n Recent advances in ultrafast technology with the advent of submicron long electron bunches and thin-film compressed attosecond x-ray pulses have now made accessible ultrafast sources that are nearly the same order of magnitude in dimensions and energy density as the scales of collective electron oscillations in crystals. Moreover, nanotechnology enabled growth of crystal tube structures not only mitigates the direct high-intensity irradiation of materials, with the most intense part of the ultrafast source propagating within the tube but also enables a high degree of control over the crystal properties.\n\n In this work, we model an experimentally practicable solid-state acceleration mechanism using collective electron oscillations in crystals that sustain propagating surface waves. These surface waves are" +"---\nabstract: 'Ultra-Reliable Low-Latency Communications have stringent delay constraints, and hence use codes with small block length (short codewords). In these cases, classical models that provide good approximations to systems with infinitely long codewords become imprecise. To remedy this, in this paper, an average coding rate expression is derived for a large-scale network with short codewords using stochastic geometry and the theory of coding in the finite blocklength regime. The average coding rate and upper and lower bounds on the outage probability of the large-scale network are derived, and a tight approximation of the outage probability is presented. 
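The finite-blocklength theory invoked in the preceding excerpt is commonly applied through the normal approximation $R(n,\epsilon) \approx C - \sqrt{V/n}\,Q^{-1}(\epsilon)$ of Polyanskiy, Poor and Verdu. Below is a sketch for the real AWGN channel with illustrative SNR and error target; this is background only, not the paper's large-scale-network expression.

```python
import numpy as np
from scipy.stats import norm

def awgn_rate_normal_approx(snr, n, eps):
    """Normal approximation for the real AWGN channel (bits/channel use)."""
    C = 0.5 * np.log2(1 + snr)                                  # capacity
    V = snr * (snr + 2) / (2 * (snr + 1) ** 2) * np.log2(np.e) ** 2  # dispersion
    return C - np.sqrt(V / n) * norm.isf(eps)                   # Q^{-1}(eps) = isf

snr, eps = 10.0, 1e-5   # 10 dB SNR and a URLLC-like error target (illustrative)
for n in [100, 500, 2000, 10**6]:
    print(f"n = {n:>7}: R ~ {awgn_rate_normal_approx(snr, n, eps):.3f} "
          f"(C = {0.5 * np.log2(1 + snr):.3f}) bits/use")
```

The gap to capacity shrinks like $1/\sqrt{n}$, which is why infinite-blocklength results overestimate performance at the short codeword lengths considered in the excerpt.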
Then, simulations are presented to study the effect of network parameters on the average coding rate and the outage probability of the network, which demonstrate that results in the literature derived for the infinite blocklength regime overestimate the network performance, whereas the results in this paper provide a more realistic performance evaluation.'\nauthor:\n- '[^1]'\nbibliography:\n- 'Bibliography.bib'\ntitle: 'On the Performance of Large-Scale Wireless Networks in the Finite Block-Length Regime'\n---\n\nStochastic Geometry; Large-Scale Network; Capacity; Outage Probability; Finite Blocklength; URLLC.\n\nIntroduction\n============\n\nThe density of cellular networks has increased significantly from 2G up to 5G, and continues to increase in" +"---\nauthor:\n- 'Ya\u015far Hi\u00e7yilmaz,[!!]{}'\nbibliography:\n- 'YU\\_NHSSM\\_JHEP.bib'\ntitle: '$ t-b-\\tau $ Yukawa Unification in Non-Holomorphic MSSM'\n---\n\n=1\n\nIntroduction {#sec:intro}\n============\n\nSupersymmetry (SUSY) is a strong Beyond the Standard Model (BSM) theory that has various motivations, such as the resolution of the gauge hierarchy problem [@Barbieri:1987fn], the unification of the gauge couplings [@Georgi:1974sy], radiative electroweak symmetry breaking (REWSB)[@Higgs:1964pj; @Englert:1964et], and a dark matter candidate under R-parity conservation. However, on the experimental side, there have not been any clues of the SUSY partners of the Standard Model (SM) particles at the LHC. This situation puts huge pressure on SUSY models. In the minimal supersymmetric extension of the SM (MSSM), the discovered 125 GeV SM-like Higgs boson [@Aad:2012tfa; @Chatrchyan:2013lba] gives rise to issues related to the fine-tuning problem. Similarly, the LHCb results for the rare decays of the B meson [@Amhis:2012bh; @Aaij:2012nna] and the recent Dark Matter (DM) results from the astrophysical experiments [@Aghanim:2018eyx] have a significant impact on the parameter space of SUSY models such as the constrained MSSM (CMSSM) and non-universal Higgs mass models (NUHM) [@Roszkowski:2014wqa]. Moreover, in the MSSM, the experimental results that show significant deviation from SM predictions for the anomalous magnetic moment of the muon $ (g \u2212 2)_\\mu $" +"---\nabstract: 'In the framework of the ESA Athena mission, the X-ray Integral Field Unit (X-IFU) instrument to be on board the X-ray Athena Observatory is a cryogenic micro-calorimeter array of Transition Edge Sensor (TES) detectors aimed at providing spatially resolved high-resolution spectroscopy. As a part of the on-board Event Processor (EP), the reconstruction software will provide the energy, spatial location and arrival time of the incoming X-ray photons hitting the detector and inducing current pulses on it. With the standard optimal filtering technique being the chosen baseline reconstruction algorithm, different modifications have been analyzed to process pulses shorter than those considered of high resolution (those where the full length is not available due to a close pulse after them), in order to select the best option based on energy resolution and computing performance results. It can be concluded that the best approach to optimize the energy resolution for short filters is the 0-padding filtering technique, benefiting also from a reduction in the computational resources. 
However, its high sensitivity to offset fluctuations currently prevents its use as the baseline treatment for the X-IFU application, for lack of consolidated information on the actual stability it will achieve in flight.'\nauthor:\n- Beatriz" +"---\nabstract: 'A wide class of nonuniformly totally polarized beams is introduced whose members preserve their transverse polarization pattern during paraxial propagation. They are obtained as suitable combinations of Gaussian modes and find applications in polarimetric techniques that use a single input beam for the determination of the Mueller matrix of a homogeneous sample. The class also includes beams that present all possible polarization states across their transverse section (Full-Poincar\u00e9 beams). An example of such beams and its use in polarimetry is discussed in detail. The requirement of invariance of the polarization pattern can be limited to the propagation in the far field, in which case less restrictive conditions are found and a wider class of beams is obtained.'\nauthor:\n- |\n J. C. G. de Sande\\\n ETSIS de Telecomunicaci\u00f3n\\\n Universidad Polit\u00e9cnica de Madrid\\\n Campus Sur 28031 Madrid, Spain\\\n `juancarlos.gonzalez@upm.es`\\\n Gemma Piquero\\\n Departamento de \u00d3ptica\\\n Universidad Complutense de Madrid\\\n Ciudad Universitaria, 28040 Madrid, Spain\\\n `piquero@ucm.es`\\\n Juan Carlos Suarez-Bermejo\\\n Materials Science\\\n Universidad Polit\u00e9cnica de Madrid\\\n Av. de la Memoria 4, 28040 Madrid, Spain\\\n `juancarlos.suarez@upm.es` Massimo Santarsiero\\\n Dipartimento di Ingegneria\\\n Universit\u00e0 Roma Tre\\\n Via V. Volterra 62, 00146 Rome, Italy\\\n `massimo.santarsiero@uniroma3.it`\\\ntitle: 'Beams with propagation-invariant transverse polarization pattern'\n---\n\nPolarimetry is a noninvasive" +"---\nabstract: 'The idea of a social cloud has emerged as a resource sharing paradigm in a social network context. Undoubtedly, state-of-the-art social cloud systems demonstrate the potential of the social cloud to act as complementary to other computing paradigms such as the cloud, grid, peer-to-peer and volunteer computing. However, in this note, we have done a critical survey of the social cloud literature and come to the conclusion that these initial efforts fail to offer a general framework of the social cloud and also fail to show the uniqueness of the social cloud. This short note reveals that there are significant differences regarding the concept of the social cloud, resource definition, resource sharing and allocation mechanisms, and its applications and stakeholders. This study is an attempt to express a need for a general framework of the social cloud, which can incorporate the various views and resource sharing setups discussed in the literature.'\nauthor:\n- 'Pramod\u00a0C.\u00a0Mane,\u00a0 Kapil\u00a0Ahuja, and\u00a0Pradeep\u00a0Singh,\u00a0 [^1] [^2][^3]'\ntitle: A Critical Note on Social Cloud\n---\n\nsocial-cloud, cloud computing, grid computing, peer-to-peer computing, volunteer computing, network-services.\n\nIntroduction\n============\n\nIn the past few years, researchers have shown an increased interest in real-world social relationships to develop the theory of social" +"---\nabstract: |\n In a housing market of Shapley and Scarf [@SS1974], each agent is endowed with one indivisible object and has preferences over all objects. An allocation of the objects is in the (strong) core if there exists no (weakly) blocking coalition. 
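The X-IFU abstract above selects 0-padding as the best optimal-filtering variant for short pulse records. As a toy illustration of that idea under a white-noise assumption (the pulse shape, record lengths and noise level are invented for the sketch; the real pipeline uses noise-weighted filters):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pulse template: fast rise, exponential decay (illustrative shape)
n = 1024
t = np.arange(n)
template = (1.0 - np.exp(-t / 5.0)) * np.exp(-t / 200.0)

def ls_amplitude(data, templ):
    """White-noise optimal filter: least-squares amplitude estimate
    a_hat = <s, d> / <s, s> over the available samples."""
    return np.dot(templ, data) / np.dot(templ, templ)

a_true = 3.0
record = a_true * template + rng.normal(scale=0.05, size=n)

# Full-length ("high resolution") record
print("full record:   ", ls_amplitude(record, template))

# Short record truncated by a close subsequent pulse: 0-padding keeps a
# single full-length filter but zeroes its weights beyond the record,
# which for white noise equals least squares on the short stretch.
m = 256
filt = np.where(t < m, template, 0.0)
padded = np.concatenate([record[:m], np.zeros(n - m)])
print("0-padded short:", np.dot(filt, padded) / np.dot(filt, filt))
```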
In this paper we show that in the case of strict preferences the unique strong core allocation (or competitive allocation) “respects improvement”: if an agent’s object becomes more attractive for some other agents, then the agent’s allotment in the unique strong core allocation weakly improves. We obtain a general result in the case of ties in the preferences and provide new integer programming formulations for computing (strong) core and competitive allocations. Finally, we conduct computer simulations to compare the game-theoretical solutions with maximum size and maximum weight exchanges for markets that resemble the pools of kidney exchange programmes.\n\n [**Keywords:** housing market, respecting improvement, core, competitive allocations, integer programming, kidney exchange programmes.]{}\nauthor:\n- 'P\u00e9ter Bir\u00f3[^1]'\n- 'Flip Klijn[^2]'\n- 'Xenia Klimentova[^3]'\n- 'Ana Viana[^4]'\nbibliography:\n- 'KEPstable-v1.bib'\ntitle: 'Shapley-Scarf Housing Markets: Respecting Improvement, Integer Programming, and Kidney Exchange[^5]'\n---\n\nIntroduction {#sec:intro}\n============\n\nShapley and Scarf [@SS1974] introduced so-called “housing markets” to model trading in commodities that are inherently" +"---\nabstract: 'We give a general construction of triangulations starting from a walk in the quarter plane with small steps, which is a discrete version of the mating of trees. We use a special instance of this construction to give a bijection between maps equipped with a rooted spanning tree and walks in the quarter plane. We also show how the construction allows one to recover several known bijections between such objects in a uniform way.'\naddress: 'Institut Gaspard Monge UMR CNRS - 8049 Universit\u00e9 Gustave Eiffel 5 boulevard Descartes, 77454 Champs-Sur-Marne FRANCE'\nauthor:\n- Philippe Biane\ntitle: 'Mating of discrete trees and walks in the quarter-plane'\n---\n\nIntroduction\n============\n\nMating of polynomials originates in complex dynamics, where one can match two Julia sets in order to build a topological sphere or a surface, see e.g. [@BEKMPRT] and\n\nhttps://www.math.univ-toulouse.fr/\u00a0cheritat/MatMovies/\n\nfor nice pictures and movies. This includes in particular the case of Julia sets which are topologically real trees. This construction has been introduced in probability by Le Gall and Paulin [@LP] for studying the topology of the Brownian map and then used, under the name “mating of trees” by Duplantier, Miller and Sheffield [@DMS] in quantum gravity, followed by many" +"---\nabstract: 'The polarisation labelling spectroscopy technique was employed to study the 3$^{1}\\Pi_{u}$ state of the Cs$_2$ molecule. The main equilibrium constants are $T_e=20684.60$\u00a0cm$^{-1}$, $\\omega_e=30.61$\u00a0cm$^{-1}$ and $R_e=5.27$\u00a0\u00c5. Vibrational levels $v=4-35$ of the 3$^{1}\\Pi_{u}$ state were found to be subject to strong perturbations by the neighbouring electronic states. 
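For strict preferences, the unique strong core allocation of the Shapley-Scarf housing market above is classically computed by Gale's Top Trading Cycles algorithm. A minimal sketch of textbook TTC (the preference lists are illustrative; the paper's integer programming formulations and the treatment of ties are not reproduced here):

```python
def top_trading_cycles(prefs):
    """Gale's Top Trading Cycles for a Shapley-Scarf housing market.
    prefs[i] is agent i's strict ranking of houses, where house j is
    the house initially owned by agent j. Returns the allocation
    (agent -> house), which is the unique strong core allocation."""
    remaining = set(range(len(prefs)))
    allocation = {}
    while remaining:
        # Each remaining agent points at the owner of their favourite
        # remaining house; following pointers must close a cycle.
        point = {i: next(h for h in prefs[i] if h in remaining)
                 for i in remaining}
        start = next(iter(remaining))
        path, seen = [], set()
        node = start
        while node not in seen:
            seen.add(node)
            path.append(node)
            node = point[node]
        cycle = path[path.index(node):]  # the closed trading cycle
        for i in cycle:                  # each trades for the house pointed at
            allocation[i] = point[i]
        remaining -= set(cycle)
    return allocation

# Three agents: agent 0 prefers house 1, agent 1 prefers house 0, ...
print(top_trading_cycles([[1, 0, 2], [0, 2, 1], [1, 2, 0]]))  # {0: 1, 1: 0, 2: 2}
```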
Energies of 3094 rovibronic levels of the perturbed complex were determined.'\naddress:\n- 'Institute of Physics, Polish Academy of Sciences, al.\u00a0Lotnik\u00f3w\u00a032/46, 02-668\u00a0Warsaw, Poland'\n- 'Institute of Experimental Physics, Faculty of Physics, University of Warsaw, ul.\u00a0Pasteura\u00a05, 02-093\u00a0Warszawa, Poland'\nauthor:\n- Jacek Szczepkowski\n- Anna Grochola\n- Wlodzimierz Jastrzebski\n- Pawel Kowalczyk\ntitle: 'On the 3$^{1}\\Pi_{u}$ state in caesium dimer'\n---\n\nlaser spectroscopy, alkali dimers, electronic states, perturbations 31.50.Df, 33.20.Kf, 33.20.Vq, 42.62.Fi\n\nIn the present short communication we report an experimental investigation of the 3$^{1}\\Pi_{u}$ electronic state of Cs$_2$. Up to now only the lowest vibrational levels $v=0-6$ of this state have been observed\u00a0[@1] and its sole theoretical description comes from an unpublished thesis by Spies\u00a0[@2]. Using the V-type double resonance polarisation labelling spectroscopy (PLS) method we were able to extend experimental observations of the 3$^{1}\\Pi_{u}$ state up to $v=35$. As our experimental method is described in" +"---\nabstract: 'Factorizing speech as disentangled speech representations is vital to achieve highly controllable style transfer in voice conversion (VC). Conventional speech representation learning methods in VC only factorize speech as speaker and content, lacking controllability on other prosody-related factors. State-of-the-art speech representation learning methods for more speech factors rely on rudimentary disentanglement algorithms such as random resampling and ad-hoc bottleneck layer size adjustment, which makes it hard to ensure robust speech representation disentanglement. To increase the robustness of highly controllable style transfer on multiple factors in VC, we propose a disentangled speech representation learning framework based on adversarial learning. Four speech representations characterizing content, timbre, rhythm and pitch are extracted, and further disentangled by an adversarial Mask-And-Predict (MAP) network inspired by BERT. The adversarial network is used to minimize the correlations between the speech representations, by randomly masking and predicting one of the representations from the others. Experimental results show that the proposed framework significantly improves the robustness of VC on multiple factors by increasing the speech quality MOS from 2.79 to 3.30 and decreasing the MCD from 3.89 to 3.58.'\naddress: |\n $^1$ Shenzhen International Graduate School, Tsinghua University, Shenzhen, China\\\n $^2$ Huya Inc., Guangzhou, China\\\n $^3$ The" +"---\nabstract: 'We report the temporal and spectral analysis of three thermonuclear X-ray bursts from [4U\u00a01608$-$52]{}, observed by the Neutron Star Interior Composition Explorer (NICER) during and just after the outburst observed from the source in 2020. In two of the X-ray bursts, we detect secondary peaks, 30 and 18 seconds after the initial peaks. The secondary peaks show a fast rise exponential decay-like shape resembling a thermonuclear X-ray burst. Time-resolved X-ray spectral analysis reveals that the peak flux, blackbody temperature, and apparent emitting radius values of the initial peaks are in agreement with X-ray bursts previously observed from [4U\u00a01608$-$52]{}, while the same values for the secondary peaks tend toward the lower end of the distribution of bursts observed from this source. 
The third X-ray burst, which happened during much lower accretion rates, did not show any evidence for a deviation from an exponential decay and was significantly brighter than the previous bursts. We present the properties of the secondary peaks and discuss the events within the framework of short recurrence time bursts or bursts with secondary peaks. We find that the current observations do not fit in standard scenarios and challenge our understanding of flame spreading.'\nauthor:" +"---\nauthor:\n- |\n John Buczek and Viktor Ivankevych\\\n buczek.j@northeastern.edu\\\n ivankevych.v@northeastern.edu\nbibliography:\n- 'eece7228.bib'\ntitle: Practical Utility PV Multilevel Inverter Solutions\n---\n\n***Abstract–*Multilevel inverters are used to improve power quality and reduce component stresses. This paper describes and compares two multilevel cascaded three phase inverter implementations with two different modulation techniques: Phase Shifted Pulse Width Modulation, and Nearest Level Control. Further analysis will show the required number of inverter levels, with respect to the modulation technique, needed to provide the desired power and power quality to a resistive load or the grid. A cascaded inverter will be designed and simulated to draw power from PV cells.**\n\nIntroduction\n============\n\nA multi-level inverter is a power electronic system that synthesizes a desired voltage output from several levels of DC voltages as inputs [@pv-citation-1]. Today, there are many different topologies of multilevel converters including, but not limited to, Diode-Clamped, Flying Capacitor, and Cascade H-bridge (CHB). While the topologies may be different, they all offer similar beneficial features. For sinusoidal outputs, multilevel converters improve their output voltage quality as the number of levels of the converter increases, thus decreasing the Total Harmonic Distortion (THD) [@Kouro]. For this reason and others, multilevel converters have been used for high power photovoltaic (PV)" +"---\nabstract: 'We introduce the *Collatz conjecture* and its history. We state some definitions related to the conjecture and use them to establish lemmas that justify its main properties. With these lemmas we derive the main properties of the Collatz graph on $\\mathbb{Z}/10\\mathbb{Z}$, and we then use these results to draw the graph on $\\mathbb{Z}/10\\mathbb{Z}$.'\nauthor:\n- 'Benyamin Khanzadeh H.'\ntitle: 'Collatz mapping on $\\Z/10\\Z$'\n---\n\nIntroduction\n============\n\nThe *Collatz conjecture*, also known as the “$3x+1$” problem, is a conjecture in mathematics that concerns sequences of integers. These sequences start with an arbitrary positive integer, and each term is obtained from the previous one as follows: if the previous term is even, the next term is one half of it, and if the previous term is odd, the next term is $3$ times the previous one plus $1$. The conjecture says that no matter what the starting value is, the sequence will always reach $1$. 
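The inverter paper above compares modulation techniques and judges output quality by THD. A hedged numerical sketch of Nearest Level Control and an FFT-based THD estimate (level counts, modulation index and the single-period window are illustrative choices, not the paper's simulation setup):

```python
import numpy as np

def nearest_level_control(n_levels, m=0.9, samples=4096):
    """Staircase output of a (2*n_levels+1)-level inverter via Nearest
    Level Control: round the scaled sine reference to the nearest level."""
    t = np.linspace(0.0, 1.0, samples, endpoint=False)  # one period
    ref = m * n_levels * np.sin(2 * np.pi * t)
    return np.round(ref)

def thd(signal):
    """Total harmonic distortion: RMS of harmonics 2..N over the
    fundamental, taken from the FFT of one exact period."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    return np.sqrt(np.sum(spec[2:] ** 2)) / spec[1]

for n_levels in (2, 5, 11):
    v = nearest_level_control(n_levels)
    print(f"{2 * n_levels + 1}-level staircase, THD = {thd(v):.2%}")
```

Consistent with the prose above, the THD printed here falls as the number of levels rises.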
[@A]\n\nMore specifically, consider the *Collatz function* as the map ${\\operatorname{Col}}:{\\mathbb{N}}\\to{\\mathbb{N}}$, defined by:$${\\operatorname{Col}}(x)=\\left\\{\\begin{array}{ll}\n\\frac{x}{2}&\\text{if }x\\equiv0\\mod2\\\\\n3x+1&\\text{if }x\\equiv1\\mod2\n\\end{array}\\right.$$Thus, the Collatz conjecture says that, for every $x\\in{\\mathbb{N}}$, there" +"---\nabstract: 'Quantifying tail dependence is an important issue in insurance and risk management. The prevalent tail dependence coefficient (TDC), however, is known to underestimate the degree of tail dependence and it does not capture non-exchangeable tail dependence since it evaluates the limiting tail probability only along the main diagonal. To overcome these issues, two novel tail dependence measures called the maximal tail concordance measure (MTCM) and the average tail concordance measure (ATCM) are proposed. Both measures are constructed based on tail copulas and possess clear probabilistic interpretations in that the MTCM evaluates the largest limiting probability among all comparable rectangles in the tail, and the ATCM is a normalized average of these limiting probabilities. In contrast to the TDC, the proposed measures can capture non-exchangeable tail dependence. Analytical forms of the proposed measures are also derived for various copulas. A real data analysis reveals striking tail dependence and tail non-exchangeability of the return series of stock indices, particularly in periods of financial distress.'\nauthor:\n- 'Takaaki Koike[^1]'\n- Shogo Kato\n- Marius Hofert\ntitle: 'Measuring non-exchangeable tail dependence using tail copulas'\n---\n\n\\\n*Keywords:* Copula; tail copula; tail dependence; tail dependence coefficient; tail dependence function; tail non-exchangeability\\\n*JEL codes:*" +"---\nabstract: 'In online advertising, recommender systems try to propose items from a list of products to potential customers according to their interests. Such systems have been increasingly deployed in E-commerce due to the rapid growth of information technology and availability of large datasets. The ever-increasing progress in the field of artificial intelligence has provided powerful tools for dealing with such real-life problems. Deep reinforcement learning (RL) that deploys deep neural networks as universal function approximators can be viewed as a valid approach for design and implementation of recommender systems. This paper provides a comparative study between value-based and policy-based deep RL algorithms for designing recommender systems for online advertising. The RecoGym environment is adopted for training these RL-based recommender systems, where the long short term memory (LSTM) is deployed to build value and policy networks in these two approaches, respectively. LSTM is used to take account of the key role that order plays in the sequence of item observations by users. The designed recommender systems aim at maximising the click-through rate (CTR) for the recommended items. Finally, guidelines are provided for choosing proper RL algorithms for different scenarios that the recommender system is expected to handle.'\nauthor:\n- 'Milad" +"---\nabstract: 'Kernel methods have been among the most popular techniques in machine learning, where learning tasks are solved using the property of reproducing kernel Hilbert space (RKHS). In this paper, we propose a novel data analysis framework with reproducing kernel Hilbert $C^*$-module (RKHM) and kernel mean embedding (KME) in RKHM. 
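A direct transcription of the Collatz function defined above, as a small sketch; iterating until $1$ has terminated for every value ever tested, but that it always does is precisely the conjecture. The last line reduces the orbit modulo 10, the quotient $\mathbb{Z}/10\mathbb{Z}$ studied in the paper:

```python
def collatz(x: int) -> int:
    """Col(x) = x/2 if x is even, 3x+1 if x is odd."""
    return x // 2 if x % 2 == 0 else 3 * x + 1

def trajectory(x: int) -> list:
    """Iterate Col from x until reaching 1 (conjecturally always)."""
    orbit = [x]
    while x != 1:
        x = collatz(x)
        orbit.append(x)
    return orbit

print(trajectory(7))                    # 7, 22, 11, 34, ..., 4, 2, 1
print([v % 10 for v in trajectory(7)])  # the same orbit in Z/10Z
```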
Since RKHM contains richer information than RKHS or vector-valued RKHS (vvRKHS), analysis with RKHM enables us to capture and extract structural properties of data such as functional data. We develop a branch of theory for RKHM to apply to data analysis, including the representer theorem and the injectivity and universality of the proposed KME. We also show that RKHM generalizes RKHS and vvRKHS. Then, we provide concrete procedures for employing RKHM and the proposed KME in data analysis.'\nauthor:\n- |\n Yuka Hashimoto yuka.hashimoto.rw@hco.ntt.co.jp\\\n NTT Network Service Systems Laboratories, NTT Corporation\\\n 3-9-11, Midori-cho, Musashinoshi, Tokyo, 180-8585, Japan /\\\n Graduate School of Science and Technology, Keio University\\\n 3-14-1, Hiyoshi, Kohoku, Yokohama, Kanagawa, 223-8522, Japan Isao Ishikawa ishikawa.isao.zx@ehime-u.ac.jp\\\n Center for Data Science, Ehime University\\\n 2-5, Bunkyo-cho, Matsuyama, Ehime, 790-8577, Japan /\\\n Center for Advanced Intelligence Project, RIKEN\\\n 1-4-1, Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan Masahiro Ikeda masahiro.ikeda@riken.jp\\\n Center for Advanced Intelligence Project, RIKEN\\\n 1-4-1," +"---\nabstract: |\n Urban evolution processes occur at different scales, with intricate interactions between levels and relatively distinct types of processes. To what extent actual urban dynamics include a strong coupling between scales, in the sense of both top-down and bottom-up feedbacks, remains an open issue with important practical implications for the sustainable management of territories. We introduce in this paper a multi-scalar simulation model of urban growth, coupling a system of cities interaction model at the macroscopic scale with morphogenesis models for the evolution of urban form at the scale of metropolitan areas. Strong coupling between scales is achieved through an update of model parameters at each scale depending on trajectories at the other scale. The model is applied and explored on synthetic systems of cities. Simulation results show a non-trivial effect of the strong coupling. As a consequence, the optimal action on policy parameters, such as those containing urban sprawl, is shifted. We also run a multi-objective optimization algorithm on the model, showing that compromises between scales are captured. Our approach opens new research directions towards more operational urban dynamics models including a strong feedback between scales.\\\n **Keywords:** Urban dynamics; Systems of cities; Urban morphogenesis; Multi-scalar modeling;" +"---\nabstract: 'Making the gradients small is a fundamental optimization problem that has eluded unifying and simple convergence arguments in first-order optimization, so far primarily reserved for other convergence criteria, such as reducing the optimality gap. We introduce a novel potential function-based framework to study the convergence of standard methods for making the gradients small in smooth convex optimization and convex-concave min-max optimization. Our framework is intuitive and it provides a lens for viewing algorithms that make the gradients small as being driven by a trade-off between reducing either the gradient norm or a certain notion of an optimality gap. 
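The RKHM framework above generalizes the scalar RKHS setting. Purely as background for that comparison, a minimal kernel ridge regression in an ordinary RKHS, whose representer-theorem solution $f(\cdot)=\sum_i \alpha_i k(x_i,\cdot)$ with $\alpha=(K+\lambda I)^{-1}y$ is the kind of result the paper's RKHM representer theorem extends; the kernel, data and regularization strength are illustrative.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-2, gamma=1.0):
    """Representer theorem: the minimizer lives in span{k(x_i, .)},
    so fitting reduces to the linear solve (K + lam*I) alpha = y."""
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(alpha, X_train, X_new, gamma=1.0):
    return gaussian_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
alpha = krr_fit(X, y)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(krr_predict(alpha, X, X_test))  # roughly sin at the test points
```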
On the lower bounds side, we discuss tightness of the obtained convergence results for the convex setup and provide a new lower bound for minimizing the norm of cocoercive operators that allows us to argue about optimality of methods in the min-max setup.'\nauthor:\n- |\n Jelena Diakonikolas\\\n Department of Computer Sciences\\\n University of Wisconsin-Madison\\\n `jelena@cs.wisc.edu`\n- |\n Puqian Wang\\\n School of Mathematics\\\n Shandong University\\\n `e.puqian.wang@gmail.com`\nbibliography:\n- 'references.bib'\ntitle: 'Potential Function-based Framework for Making the Gradients Small in Convex and Min-Max Optimization[^1]'\n---\n\ngradient minimization, convergence analysis, potential function\n\n90C06, 90C25, 65K05\n\nIntroduction\n============\n\nOne of the most basic facts" +"---\nabstract: 'We apply the ideas of effective field theory to nonrelativistic quantum mechanics. Utilizing an artificial boundary of ignorance as a calculational tool, we develop the effective theory using boundary conditions to encode short-ranged effects that are deliberately not modeled; thus, the boundary conditions play a role similar to the effective action in field theory. Unitarity is temporarily violated in this method, but is preserved on average. As a demonstration of this approach, we consider the Coulomb interaction and find that this effective quantum mechanics can predict the bound state energies to very high accuracy with a small number of fitting parameters. It is also shown to be equivalent to the theory of quantum defects, but derived here using an *effective* framework. The method respects electromagnetic gauge invariance and also can describe decays due to short-ranged interactions, such as those found in positronium. Effective quantum mechanics appears applicable for systems that admit analytic long-range descriptions, but whose short-ranged effects are not reliably or efficiently modeled. Potential applications of this approach include atomic and condensed matter systems, but it may also provide a useful perspective for the study of black holes.'\nauthor:\n- 'David M. Jacobs'\n- Matthew Jankowski\nbibliography:\n-" +"---\nabstract: 'The quantum mass function, which can be regarded as an extension of classical Dempster-Shafer (D-S) evidence theory, has been applied in many fields because of its efficiency and validity in managing uncertainties in quantum form. However, how to handle uncertainties in quantum form is still an open issue. In this paper, a new method is proposed to dispose of uncertain quantum information, which is called the two-dimensional quantum mass function (TDQMF). A TDQMF consists of two elements, $TQ = (\\mathbb{Q}_{original},\\mathbb{Q}_{indicative})$, where both $\\mathbb{Q}$s are quantum mass functions and $\\mathbb{Q}_{indicative}$ is an indicator of the reliability of $\\mathbb{Q}_{original}$. The proposed method offers more flexibility and effectiveness in handling quantum uncertainty than the primary quantum mass function. 
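To make the trade-off framing of the gradient-norm paper above concrete, a small sketch: plain gradient descent on a toy smooth convex quadratic, tracking both the optimality gap and the best gradient norm seen so far. This only illustrates the two convergence criteria; it is not the paper's potential-function analysis, and the problem instance is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy smooth convex objective: f(x) = 0.5*x^T A x - b^T x
M = rng.normal(size=(20, 20))
A = M.T @ M / 20 + 0.1 * np.eye(20)  # positive definite
b = rng.normal(size=20)
x_star = np.linalg.solve(A, b)       # unique minimizer

f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
L = np.linalg.eigvalsh(A).max()      # smoothness constant

x = np.zeros(20)
best_gnorm = np.inf
for k in range(1, 201):
    g = grad(x)
    best_gnorm = min(best_gnorm, np.linalg.norm(g))
    x = x - g / L                    # gradient descent with step 1/L
    if k % 50 == 0:
        print(k, "gap:", f(x) - f(x_star), "best ||grad||:", best_gnorm)
```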
Besides, some numerical examples are provided and some practical applications are given to verify its correctness and validity.'\naddress: 'School of Computer and Information Science, Southwest University, Chongqing, 400715, China'\nauthor:\n- Yuanpeng He\n- Fuyuan Xiao\nbibliography:\n- 'cite.bib'\ntitle: 'TDQMF: Two-dimensional quantum mass function'\n---\n\nQuantum mass function \u00a0\u00a0 Dempster-Shafer evidence theory \u00a0\u00a0 Uncertainties \u00a0\u00a0Two-dimension\n\nIntroduction\n============\n\nIt is unavoidable to deal with indeterminate information in practical situations [@Deng2020ScienceChina; @Seiti2018; @MURPHY20001;" +"---\nabstract: 'This paper presents a novel hierarchical deep reinforcement learning (DRL) based design for the voltage control of power grids. DRL agents are trained for fast, and adaptive selection of control actions such that the voltage recovery criterion can be met following disturbances. Existing voltage control techniques suffer from the issues of speed of operation, optimal coordination between different locations, and scalability. We exploit the area-wise division structure of the power system to propose a hierarchical DRL design that can be scaled to the larger grid models. We employ an enhanced augmented random search algorithm that is tailored for the voltage control problem in a two-level architecture. We train area-wise decentralized RL agents to compute lower-level policies for the individual areas, and concurrently train a higher-level DRL agent that uses the updates of the lower-level policies to efficiently coordinate the control actions taken by the lower-level agents. Numerical experiments on the IEEE benchmark $39-$bus model with $3$ areas demonstrate the advantages and various intricacies of the proposed hierarchical approach.'\nauthor:\n- |\n Sayak Mukherjee,\u00a0 Renke Huang,\u00a0\\\n Qiuhua Huang,\u00a0, Thanh Long Vu,\u00a0, and Tianzhixi Yin [^1] [^2]\nbibliography:\n- 'HDRL4LS.bib'\ntitle: 'Scalable Voltage Control using Structure-Driven Hierarchical" +"---\nabstract: |\n Quantum cryptography is known for enabling functionalities that are unattainable using classical information alone. Recently, *Secure Software Leasing (SSL)* has emerged as one of these areas of interest. Given a target circuit\u00a0$C$ from a circuit class, SSL produces an encoding of\u00a0$C$ that enables a recipient to evaluate $C$, and also enables the originator of the software to *verify* that the software has been *returned* \u2014 meaning that the recipient has relinquished the possibility of any further use of the software. Clearly, such a functionality is unachievable using classical information alone, since it is impossible to prevent a user from keeping a copy of the software. Recent results have shown the achievability of SSL using quantum information for a class of functions called *compute-and-compare* (these are a generalization of the well-known *point functions*). These prior works, however all make use of setup or computational assumptions. 
Here, we show that SSL is achievable for compute-and-compare circuits *without any assumptions*.\n\n Our technique involves the study of *quantum copy-protection*, which is a notion related to\u00a0SSL, but where the encoding procedure inherently *prevents* a would-be quantum software pirate from *splitting* a single copy of an encoding for\u00a0$C$ into" +"---\nabstract: |\n In this paper, we construct a bound copula, which can reach both Frechet's lower and upper bounds for the perfect positive and negative dependence cases. Since it covers a wide range of dependency and is simple for computational purposes, it can be very useful. We then develop a new perturbed copula using the lower and upper bounds of the Frechet copula and show that it satisfies all properties of a copula. In some cases, it is very difficult to get results such as distribution functions and expected values in explicit form by using copulas such as the Archimedean, Gaussian and $t$-copulas. Thus, we can use these new copulas. For both copulas, we derive the strength of dependency measures such as Spearman's rho, Kendall's tau, Blomqvist's beta and Gini's gamma, and the tail dependency coefficients. As an application, we use the bound copula to analyze the dependency between two service times to evaluate the mean waiting time and the mean service time when customers launch two replicas of each task on two parallel servers using the cancel-on-finish policy. We assume that the inter-arrival time is exponential and the service time is general.\\\n **Keywords:** Bound copula; Frechet's upper and" +"---\nauthor:\n- 'B. Popescu Braileanu [^1]'\n- 'V. S. Lukin'\n- 'E. Khomenko'\n- '\u00c1. de Vicente'\ndate: 'Received 2021; Accepted XXXX'\ntitle: 'Two-fluid simulations of Rayleigh-Taylor instability in a magnetized solar prominence thread II. Effects of collisionality.'\n---\n\nIntroduction\n============\n\nSolar prominences consist of plasma confined in large-scale magnetic structures. The prominence plasma is usually [ about two orders of magnitude]{} cooler and denser than the surrounding corona, and it is believed to have a chromospheric origin [@2010Labrosse]. Therefore, in a prominence, plasma is only partially ionized, and the timescale associated with collisions between neutrals and ions can be comparable to hydrodynamic timescales. [ There have been a number of observational studies looking for indications of decoupling in velocity between ions and neutrals [@2007Gilbert; @2016Khomenko; @2017Anan].]{}\n\nThe Rayleigh-Taylor instability (RTI) occurs when a heavier fluid is accelerated against a lighter fluid. In a two-dimensional geometry, a small perturbation at the interface between the heavy and light fluids that varies in the direction perpendicular to the direction of gravity may grow without bounds, depending on the scale of the perturbation. It is well known that within the ideal magnetohydrodynamics (MHD) model a discontinuous interface between heavy and light fluids" +"---\nabstract: |\n A key task of data science is to identify relevant features linked to certain output variables that are supposed to be modeled or predicted. To obtain a small but meaningful model, it is important to find stochastically independent variables capturing all the information necessary to model or predict the output variables sufficiently. Therefore, we introduce in this work a framework to detect linear and non-linear dependencies between different features. 
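To illustrate the Fréchet bounds that the bound copula above is built to attain, a hedged numerical sketch: the lower and upper bound copulas W and M, a simple convex mixture sweeping between them (an illustrative family, not necessarily the paper's construction), and Spearman's rho from $\rho_S = 12\int\!\!\int C(u,v)\,du\,dv - 3$.

```python
import numpy as np

# Frechet-Hoeffding bounds: W (perfect negative), M (perfect positive)
W = lambda u, v: np.maximum(u + v - 1.0, 0.0)
M = lambda u, v: np.minimum(u, v)

def mixture(lam):
    """Convex mixture lam*M + (1-lam)*W; convex combinations of
    copulas are again copulas."""
    return lambda u, v: lam * M(u, v) + (1.0 - lam) * W(u, v)

def spearman_rho(C, n=400):
    """rho_S = 12 * integral of C over the unit square, minus 3,
    approximated on a midpoint grid."""
    g = (np.arange(n) + 0.5) / n
    U, V = np.meshgrid(g, g)
    return 12.0 * C(U, V).mean() - 3.0

for lam in (0.0, 0.5, 1.0):
    print(lam, round(spearman_rho(mixture(lam)), 3))  # -1.0, 0.0, 1.0
```

The endpoints reproduce $\rho_S = -1$ and $+1$, the perfect negative and positive dependence cases the abstract refers to.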
As we will show, features that are actually functions of other features do not represent further information. Consequently, a model reduction neglecting such features conserves the relevant information, reduces noise and thus improves the quality of the model. Furthermore, a smaller model makes it easier to adopt a model of a given system. In addition, the approach structures dependencies within all the considered features. This provides advantages for classical modeling, ranging from regression to differential equations, and for machine learning.\n\n To show the generality and applicability of the presented framework, 2154 features of a data center are measured and a model for the classification of faulty and non-faulty states of the data center is set up. This number of features is automatically reduced by the framework to 161" +"---\nabstract: 'Voice cloning is the task of learning to synthesize the voice of an unseen speaker from a few samples. While current voice cloning methods achieve promising results in Text-to-Speech (TTS) synthesis for a new voice, these approaches lack the ability to control the expressiveness of synthesized audio. In this work, we propose a controllable voice cloning method that allows fine-grained control over various style aspects of the synthesized speech for an unseen speaker. We achieve this by explicitly conditioning the speech synthesis model on a speaker encoding, pitch contour and latent style tokens during training. Through both quantitative and qualitative evaluations, we show that our framework can be used for various expressive voice cloning tasks using only a few transcribed or untranscribed speech samples for a new speaker. These cloning tasks include style transfer from a reference speech, synthesizing speech directly from text, and fine-grained style control by manipulating the style conditioning variables during inference. [^1]'\nauthor:\n- |\n **\\***Paarth Neekhara, **\\***Shehzeen Hussain,\\\n Shlomo Dubnov, Farinaz Koushanfar, Julian McAuley\\\n University of California San Diego\\\n **\\***Equal contribution\\\nbibliography:\n- 'myref.bib'\ntitle: Expressive Neural Voice Cloning\n---\n\nIntroduction\n============\n\nRecent research efforts in voice cloning have focused on synthesizing a" +"---\nabstract: 'In this paper, we describe our experience incorporating gradual types in a statically typed functional language with Hindley-Milner style type inference. Where most gradually typed systems aim to improve static checking in a dynamically typed language, we approach it from the opposite perspective and promote dynamic checking in a statically typed language. Our approach provides a glimpse into how languages like SML and OCaml might handle gradual typing. We discuss our implementation and challenges faced—specifically how gradual typing rules apply to our representation of composite and recursive types. We review the various implementations that add dynamic typing to a statically typed language in order to highlight the different ways of mixing static and dynamic typing and examine possible inspirations while maintaining the gradual nature of our type system. 
This paper also discusses our motivation for adding gradual types to our language, and the practical benefits of doing so in our industrial setting.'\nauthor:\n- Bhargav Shivkumar\n- Enrique Naudon\n- Lukasz Ziarek\nbibliography:\n- 'main.bib'\ntitle: Putting gradual types to work\n---\n\nIntroduction {#sec:intro}\n============\n\nStatic typing and dynamic typing are two opposing type system paradigms. Statically typed languages are able to catch more programmer bugs early in" +"---\nabstract: 'Recent progress in high-dispersion spectroscopy has revealed the presence of vaporized heavy metals and ions in the atmospheres of hot Jupiters whose dayside temperature is larger than 2000 K, categorized as ultra hot Jupiters (UHJs). Using the archival data of high resolution transmission spectroscopy obtained with the Subaru telescope, we searched for neutral metals in HD149026b, a hot Jupiter cooler than UHJs. By removing stellar and telluric absorption and using a cross-correlation technique, we report a tentative detection of neutral titanium at 4.4 $\\sigma$ and a marginal signal of neutral iron at 2.8 $\\sigma$ in the atmosphere. This is the first detection of neutral titanium in an exoplanetary atmosphere. In this temperature range, titanium tends to form titanium oxide (TiO). The fact that we did not detect any signal from TiO suggests that the C/O ratio in the atmosphere is higher than the solar value. The detection of metals in the atmospheres of hot Jupiters cooler than UHJs will be useful for understanding the atmospheric structure and formation history of hot Jupiters.'\nauthor:\n- Masato Ishizuka\n- Hajime Kawahara\n- 'Stevanus K. Nugroho'\n- Yui Kawashima\n- Teruyuki Hirano\n- Motohide Tamura\nbibliography:\n- 'sample63.bib'\ntitle: Neutral metals in" +"---\nabstract: 'In this paper, we propose a fast method for simultaneous reconstruction and segmentation (SRS) in X-ray computed tomography (CT). Our work is based on the SRS model where Bayes’ rule and the maximum a posteriori (MAP) are used on the hidden Markov measure field model (HMMFM). The original method leads to a logarithmic-summation (log-sum) term, which is non-separable with respect to the classification index. The minimization problem in the model was solved by using a constrained gradient descent method, the Frank-Wolfe algorithm, which is very time-consuming, especially when dealing with large-scale CT problems. The starting point of this paper is the commutativity of log-sum operations, where the log-sum problem can be transformed into a sum-log problem by introducing an auxiliary variable. The corresponding sum-log problem for the SRS model is separable. After applying an alternating minimization method, this problem turns into several easy-to-solve convex sub-problems. In the paper, we also study an improved model by adding Tikhonov regularization, and give some convergence results. Experimental results demonstrate that the proposed algorithms produce results comparable with the original SRS method using much less CPU time.'\nauthor:\n- \nbibliography:\n- 'bibtex.bib'\ntitle: 'A fast method for simultaneous reconstruction and segmentation in X-ray CT application'\n---\n\nSimultaneous" +"---\nabstract: 'We present a concept of a tunable optical excitation of spin waves and filtering their spectra in a ferromagnetic film with a 180$^{\\circ}$ N\u00e9el domain wall. 
We show by means of micromagnetic simulation that the fluence of the femtosecond laser pulse and its position with respect to the domain wall affect the frequencies of the excited spin waves, and that the presence of the domain wall plays a crucial role in the control of the spin waves’ spectrum. The predicted effects are understood by analyzing the changes of the spin waves’ dispersion under the impact of the laser pulse.'\nauthor:\n- 'N.E. Khokhlov'\n- 'A.E. Khramova'\n- 'Ia.A. Filatov'\n- 'P.I. Gerevenkov'\n- 'B.A. Klinskaya'\n- 'A.M. Kalashnikova'\ntitle: N\u00e9el domain wall as a tunable filter for optically excited magnetostatic waves\n---\n\nIntroduction\n============\n\nIn magnonics, spin waves (SW) are used to implement alternative methods of transferring information in magnetic nanostructures that can replace traditional transistor circuits [@Lenk-PhysRep2011; @Nikitov:UFN2015; @ChumakNPhys:2015; @Mahmoud_JAP_2020_Intro_to_SW_computing]. Unlike electric charges, SW can propagate in materials even without free charge carriers [@Kajiwara_Nature2010:Transmission_SW_in_YIG; @hou_spin_antiferromagnets_2019_NPGAsia]. Thus, SW propagation is not associated with Joule losses, the reduction of which is a challenging problem in traditional electronics. Different types of magnetic ordering support SW with" +"---\nabstract: 'We present and analyze a stochastic distributed method (S-NEAR-DGD) that can tolerate inexact computation and inaccurate information exchange to alleviate the problems of costly gradient evaluations and bandwidth-limited communication in large-scale systems. Our method is based on a class of flexible, distributed first order algorithms that allow for the trade-off of computation and communication to best accommodate the application setting. We assume that all the information exchange between nodes is subject to random distortion and that only stochastic approximations of the true gradients are available. Our theoretical results prove that the proposed algorithm converges linearly in expectation to a neighborhood of the optimal solution for strongly convex objective functions with Lipschitz gradients. We characterize the dependence of this neighborhood on algorithm and network parameters, the quality of the communication channel and the precision of the stochastic gradient approximations used. Finally, we provide numerical results to evaluate the empirical performance of our method.'\nauthor:\n- 'Charikleia\u00a0Iakovidou, Ermin\u00a0Wei [^1] [^2]'\nbibliography:\n- 'bibtex/prospectus.bib'\ntitle: 'S-NEAR-DGD: A Flexible Distributed Stochastic Gradient Method for Inexact Communication'\n---\n\nIntroduction\n============\n\nThe study of distributed optimization algorithms has been an area of intensive research for more than three decades. The need to harness" +"---\nauthor:\n- Jo\u00e3o Barata\n- Fabio Dom\u00ednguez\n- 'Carlos A. Salgado'\n- V\u00edctor Vila\nbibliography:\n- 'Lib.bib'\ntitle: 'A modified in-medium evolution equation with color coherence'\n---\n\nIntroduction {#sec:intro}\n============\n\nOne of the strongest pieces of evidence for the creation of the Quark Gluon Plasma (QGP) at RHIC\u00a0[@RHIC1; @RHIC2] and LHC\u00a0[@LHC1; @LHC2; @LHC4; @LHC5] is jet quenching: the modification of jets due to the interaction with the dense QCD medium created in high-energy collisions of heavy atomic nuclei. The most direct observable consequence of this effect is the suppression of the yields of particles and jets at large transverse momentum \u2014 the quenching. 
However, [*jet quenching*]{} is nowadays a generic name that embraces the modern technology of jet studies, originally developed for jets in vacuum (i.e. in proton-proton or simpler colliding systems), including a plethora of global or sub-jet observables with different degrees of sophistication. These new observables pose a challenge to present theoretical descriptions of in-medium jet cascades and are stimulating advances towards a more precise implementation of the underlying physics.\n\nJets in heavy-ion collisions develop partly inside the surrounding QCD matter and partly outside of it, with quantum interference between the two possibilities. Moreover, the total shower" +"---\nabstract: 'We identify the [*phase of a cycle*]{} as a new critical factor for tipping points (critical transitions) in cyclic systems subject to time-varying external conditions. As an example, we consider how contemporary climate variability induces tipping from a predator-prey cycle to extinction in two paradigmatic predator-prey models with an Allee effect. Our analysis of these examples uncovers a counter-intuitive behaviour, which we call [ phase tipping]{} or [*P-tipping*]{}, where tipping to extinction occurs only from certain phases of the cycle. To explain this behaviour, we combine global dynamics with set theory and introduce the concept of [*partial basin instability*]{} for [ attracting]{} limit cycles. This concept provides a general framework to analyse and identify [ easily testable]{} criteria for the occurrence of [ phase tipping]{} in externally forced systems, [ and can be extended to more complicated attractors.]{}'\nauthor:\n- 'Hassan Alkhayuon[^1], Rebecca C. Tyson[^2], and Sebastian Wieczorek$^*$'\nbibliography:\n- 'biblio.bib'\ndate: June 2021\ntitle: ' Phase tipping: How cyclic ecosystems respond to contemporary climate '\n---\n\nIntroduction\n============\n\nTipping points or critical transitions are fascinating nonlinear phenomena that are known to occur in complex systems subject to changing external conditions or external inputs. They are ubiquitous in" +"---\nabstract: 'Space weather phenomena such as solar flares have massive destructive power when they reach a certain magnitude. Such high-magnitude solar flare events can interfere with space-earth radio communications and neutralize space-earth electronic equipment. In the current study, we explore the deep learning approach to build a solar flare forecasting model and examine its limitations, along with its feature extraction ability, based on the available time-series data. For that purpose, we present a multi-layer 1D Convolutional Neural Network (CNN) to forecast the occurrence probability of M- and X-class solar flare events at 1, 3, 6, 12, 24, 48, 72 and 96 hour time frames. In order to train and evaluate the performance of the model, we utilised the available Geostationary Operational Environmental Satellite (GOES) X-ray time series data, ranging between July 1998 and January 2019 and covering almost entirely solar cycles 23 and 24. The forecasting models were trained and evaluated in two different scenarios, (1) random selection and (2) chronological selection, which were compared afterward. 
Moreover, we compare our results to those of state-of-the-art flare forecasting models, with both similar and different approaches. The majority of the results indicate that (1) chronological selection obtains a degradation factor of 3% versus random selection for" +"---\nabstract: 'Accelerating computational tasks with quantum resources is a widely-pursued goal that is presently limited by the challenges associated with high-fidelity control of many-body quantum systems. The paradigm of reservoir computing presents an attractive alternative, especially in the noisy intermediate-scale quantum era, since control over the internal system state and knowledge of its dynamics are not required. Instead, complex, unsupervised internal trajectories through a large state space are leveraged as a computational resource. Quantum systems offer a unique venue for reservoir computing, given the presence of interactions unavailable in analogous classical systems, and the potential for a computational space that grows exponentially with physical system size. Here, we consider a reservoir comprised of a single qudit ($d$-dimensional quantum system). We demonstrate a robust performance advantage compared to an analogous classical system, accompanied by a clear improvement with Hilbert space dimension for two benchmark tasks: signal processing and short-term memory capacity. Qudit reservoirs are directly realized by current-era quantum hardware, offering immediate practical implementation, and a promising outlook for increased performance in larger systems.'\nauthor:\n- 'W.\u00a0D.\u00a0Kalfus'\n- 'G.\u00a0J.\u00a0Ribeill'\n- 'G.\u00a0E.\u00a0Rowlands'\n- 'H.\u00a0K.\u00a0Krovi'\n- 'T.\u00a0A.\u00a0Ohki'\n- 'L.\u00a0C.\u00a0G.\u00a0Govia'" +"---\nauthor:\n- 'R.D.P. Mano,'\n- 'C.A.O. Henriques,'\n- 'F.D. Amaro'\n- 'and C.M.B. Monteiro[!!]{}'\ntitle: Electroluminescence yield in pure krypton\n---\n\nIntroduction {#sec:intro}\n============\n\nThe electroluminescence yield of gaseous xenon and argon has been studied in detail, both experimentally (e.g. see [@1; @2; @3; @4; @5; @6; @7] and references therein) and through simulation tools [@8; @9; @10; @11; @12]. At present, the main drive for those studies is the ongoing development of dual-phase [@13; @14; @15; @16; @17; @18; @19] and high-pressure gaseous [@20; @21; @22; @23] optical Time Projection Chambers (TPC), which make use of the secondary scintillation - electroluminescence (EL) - processes in the gas for the amplification of the primary ionisation signals produced by radiation interaction inside the TPC active volume. The R&D of such TPCs aims at application to Dark Matter search [@13; @14; @15; @16; @17] and to neutrino physics, such as neutrino oscillation [@18; @19], double beta decay [@20; @21; @22] and double electron capture [@24] detection. The physics behind these rare event detection experiments is of paramount importance in contemporary particle physics, nuclear physics and cosmology, justifying the enormous R&D efforts carried out by the scientific community.\n\nThe radioactivity of $^{85}$Kr"
In each episode, the learner suffers the loss accumulated along the trajectory realized by the policy chosen for the episode, and observes *aggregate bandit feedback*: the trajectory is revealed along with the cumulative loss suffered, rather than the individual losses encountered along the trajectory. Our main result is a computationally efficient algorithm with $O(\\sqrt{K})$ regret for this setting, where $K$ is the number of episodes.\n\n We establish this result via an efficient reduction to a novel bandit learning setting we call [Distorted Linear Bandits]{}([DLB]{}), which is a variant of bandit linear optimization where actions chosen by the learner are adversarially distorted before they are committed. We then develop a computationally-efficient online algorithm for [DLB]{}for which we prove an $O(\\sqrt{T})$ regret bound, where $T$ is the number of time steps. Our algorithm is based on online mirror descent with a self-concordant barrier regularization that employs a novel increasing learning rate schedule.\nauthor:\n- 'Alon Cohen [^1]'\n- 'Haim Kaplan [^2]'\n- 'Tomer Koren [^3]'\n- 'Yishay Mansour [^4]'\nbibliography:\n- 'fbmdps.bib'\ntitle: Online Markov Decision Processes with Aggregate" +"---\nabstract: 'Besides mimicking bio-chemical and multi-scale communication mechanisms, molecular communication forms a theoretical framework for virus infection processes. Towards this goal, aerosol and droplet transmission has recently been modeled as a multiuser scenario. In this letter, the \u201cinfection performance\u201d is evaluated by means of a mutual information analysis, and by an even simpler probabilistic performance measure which is closely related to absorbed viruses. The so-called infection rate depends on the distribution of the channel input events as well as on the transition probabilities between channel input and output events. The infection rate is investigated analytically for five basic discrete memoryless channel models. Numerical results for the transition probabilities are obtained by Monte Carlo simulations for pathogen-laden particle transmission in four typical indoor environments: two-person office, corridor, classroom, and bus. Particle transfer contributed significantly to infectious diseases like SARS-CoV-2 and influenza.'\nauthor:\n- 'Peter\u00a0Adam\u00a0Hoeher, Martin Damrath, Sunasheer Bhattacharjee, and Max Schurwanz[^1]'\nbibliography:\n- 'main.bib'\ntitle: On Mutual Information Analysis of Infectious Disease Transmission via Particle Propagation\n---\n\nAerosols, computer simulation, molecular communication, multiuser channels, mutual information.\n\nIntroduction\n============\n\nby Claude E.\u00a0Shannon\u2019s fundamental model of a noisy transmission system [@Shannon1948 Fig.\u00a01], viral aerosol information retrieval in communication" +"---\nabstract: 'OpenMatch is a Python-based library that serves for Neural Information Retrieval (Neu-IR) research. It provides self-contained neural and traditional IR modules, making it easy to build customized and higher-capacity IR systems. In order to develop the advantages of Neu-IR models for users, OpenMatch provides implementations of recent neural IR models, complicated experiment instructions, and advanced few-shot training methods. OpenMatch reproduces corresponding ranking results of previous work on widely-used IR benchmarks, liberating users from surplus labor in baseline reimplementation. 
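The infection-performance analysis above rests on the mutual information of discrete memoryless channels. A minimal generic sketch (the binary symmetric channel at the end is an illustrative stand-in, not one of the paper's five channel models):

```python
import numpy as np

def mutual_information(p_x, P_yx):
    """I(X;Y) in bits for a discrete memoryless channel.
    p_x: input distribution, shape (nx,).
    P_yx: transition probabilities, P_yx[x, y] = P(Y=y | X=x)."""
    p_xy = p_x[:, None] * P_yx  # joint distribution
    p_y = p_xy.sum(axis=0)      # output marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(p_xy > 0, p_xy / (p_x[:, None] * p_y[None, :]), 1.0)
    return float((p_xy * np.log2(ratio)).sum())

# Illustrative binary symmetric channel with crossover probability eps
eps = 0.1
P = np.array([[1 - eps, eps],
              [eps, 1 - eps]])
print(mutual_information(np.array([0.5, 0.5]), P))  # = 1 - H2(0.1), ~0.531 bits
```

As the abstract notes, the result depends on both the input distribution and the transition probabilities, which is why both enter the proposed infection rate.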
Our OpenMatch-based solutions achieve top-ranked empirical results on various ranking tasks, such as ad hoc retrieval and conversational retrieval, illustrating how OpenMatch facilitates building an effective IR system. The library, experimental methodologies and results of OpenMatch are all publicly available at .'\nauthor:\n- 'Zhenghao Liu$^{\\heartsuit*}$, Kaitao Zhang$^{\\heartsuit*}$, Chenyan Xiong$^{^\\spadesuit}$, Zhiyuan Liu$^\\heartsuit$ and Maosong Sun$^\\heartsuit$'\nbibliography:\n- 'citation.bib'\ntitle: 'OpenMatch: An Open Source Library for Neu-IR Research'\n---\n\n[^1]\n\nIntroduction\n============\n\nWith the rapid development of deep neural networks, Information Retrieval (IR) shows better performance and benefits many applications, such as open-domain question answering\u00a0[@chen2017reading] and fact verification\u00a0[@thorne2018fact]. Being neural has become a new tendency for the IR community, which helps to overcome the vocabulary" +"---\nabstract: 'Virtual meetings are critical for remote work because of the need for synchronous collaboration in the absence of in-person interactions. In-meeting multitasking is closely linked to people’s productivity and wellbeing. However, we currently have limited understanding of multitasking in remote meetings and its potential impact. In this paper, we present what we believe is the most comprehensive study of remote meeting multitasking behavior through an analysis of a large-scale telemetry dataset collected from February to May 2020 of U.S. Microsoft employees and a 715-person diary study. Our results demonstrate that intrinsic meeting characteristics such as size, length, time, and type significantly correlate with the extent to which people multitask, and multitasking can lead to both positive and negative outcomes. Our findings suggest important best-practice guidelines for remote meetings (e.g., avoid important meetings in the morning) and design implications for productivity tools (e.g., support positive remote multitasking).'\nauthor:\n- Hancheng Cao\n- 'Chia-Jung Lee'\n- Shamsi Iqbal\n- Mary Czerwinski\n- Priscilla Wong\n- Sean Rintel\n- Brent Hecht\n- Jaime Teevan\n- Longqi Yang\nbibliography:\n- 'other\\_ref.bib'\n- 'ref.bib'\ntitle: |\n Large Scale Analysis of Multitasking Behavior\\\n During Remote Meetings\n---\n\n<ccs2012> <concept> <concept\\_id>10010520.10010553.10010562</concept\\_id> <concept\\_desc>Computer systems organization\u00a0Embedded" +"---\nabstract: 'We present [gleam]{} (**G**alaxy **L**ine **E**mission & **A**bsorption **M**odeling), a Python tool for fitting Gaussian models to emission and absorption lines in large samples of 1D extragalactic spectra. [gleam]{} is tailored to work well in batch mode without much human interaction. With [gleam]{}, users can uniformly process a variety of spectra, including galaxies and active galactic nuclei, in a wide range of instrument setups and signal-to-noise regimes. [gleam]{} also takes advantage of multiprocessing capabilities to process spectra in parallel. With the goal of enabling reproducible workflows for its users, [gleam]{} employs a small number of input files, including a central, user-friendly configuration in which fitting constraints can be defined for groups of spectra and overrides can be specified for edge cases. For each spectrum, [gleam]{} produces a table containing measurements and error bars for the detected spectral lines and continuum, and upper limits for non-detections. 
For visual inspection and publishing, [gleam]{} can also produce plots of the data with fitted lines overlaid. In the present paper, we describe [gleam]{}'s main features, the necessary inputs, expected outputs, and some example applications, including thorough tests on a large sample of optical/infra-red multi-object spectroscopic observations and integral field spectroscopic" +"---\nabstract: 'Computer vision is playing an increasingly important role in automated malware detection with the rise of the image-based binary representation. These binary images are fast to generate, require no feature engineering, and are resilient to popular obfuscation methods. Significant research has been conducted in this area; however, it has been restricted to small-scale or private datasets that only a few industry labs and research teams have access to. This lack of availability hinders examination of existing work, development of new research, and dissemination of ideas. We release [MalNet-Image]{}, the largest public cybersecurity image database, offering 24$\\times$ more images and 70$\\times$ more classes than existing databases (available at []{}). [MalNet-Image]{} contains over 1.2 million malware images—across 47 types and 696 families—democratizing image-based malware capabilities by enabling researchers and practitioners to evaluate techniques that were previously reported in proprietary settings. We report the first million-scale malware detection results on binary images. [MalNet-Image]{} unlocks new and unique opportunities to advance the frontiers of machine learning, enabling new research directions into vision-based cyber defenses, multi-class imbalanced classification, and interpretable security.'\nauthor:\n- 'Scott Freitas$^*$'\n- Rahul Duggal\n- Duen Horng Chau\nbibliography:\n- 'main.bib'\ntitle: 'MalNet: A Large-Scale Image Database of" +"---\nabstract: 'We extend and generalize the construction of Sturm-Liouville problems for a family of Hamiltonians constrained to fulfill a third-order shape-invariance condition, focusing on the “$-2x/3$” hierarchy of solutions to the fourth Painlevé transcendent. Such a construction has been previously addressed in the literature for some particular cases, but we realize it here in the most general case. The corresponding potential in the Hamiltonian operator is a rationally extended oscillator defined in terms of the conventional Okamoto polynomials, from which we identify three different zero-modes constructed in terms of the generalized Okamoto polynomials. The third-order ladder operators of the system reveal that the complete set of eigenfunctions is decomposed as a union of three disjoint sequences of solutions, generated from a set of three-term recurrence relations. We also identify a link between the eigenfunctions of the Hamiltonian operator and a special family of exceptional Hermite polynomials.'\nauthor:\n- 'V. Hussin[^1]'\n- 'I. Marquette[^2]'\n- 'K. 
Zelaya[^3]'\ntitle: 'Third-order ladder operators, generalized Okamoto and exceptional orthogonal polynomials'\n---\n\nIntroduction\n============\n\nNonlinear equations have played a fundamental role in understanding the dynamics of some physical models, even in cases where the governing physical laws are defined in terms of linear" +"---\nabstract: 'It is an open challenge to estimate systematically the physical parameters of neutron star interiors from pulsar timing data while separating spin wandering intrinsic to the pulsar (achromatic timing noise) from measurement noise and chromatic timing noise (due to propagation effects). In this paper we formulate the classic two-component, crust-superfluid model of neutron star interiors as a noise-driven, linear dynamical system and use a state-space-based expectation-maximization method to estimate the system parameters using gravitational-wave and electromagnetic timing data. Monte Carlo simulations show that we can accurately estimate all six parameters of the two-component model provided that electromagnetic measurements of the crust angular velocity, and gravitational-wave measurements of the core angular velocity, are both available. When only electromagnetic data are available we can recover the overall relaxation time-scale, the ensemble-averaged spin-down rate, and the strength of the white-noise torque on the crust. However, the estimates of the secular torques on the two components and the white-noise torque on the superfluid are biased significantly.'\nauthor:\n- |\n Patrick M. Meyers$^{1,2}$[^1], Andrew Melatos$^{1,2}$, Nicholas J. O\u2019Neill$^{1}$\\\n $^{1}$School of Physics, University of Melbourne, Parkville, VIC 3010, Australia\\\n $^{2}$OzGrav, University of Melbourne, Parkville, VIC 3010, Australia\nbibliography:\n- 'references\\_ads.bib'\n- 'references\\_non\\_ads.bib'\ndate: 'Accepted" +"---\nabstract: 'Balanced homodyne detection as a readout scheme of gravitational-wave detectors is carefully examined, specifying the directly measured quantum operator in the detection. This specification is necessary to apply quantum measurement theory to gravitational-wave detections. We clarify the contribution of vacuum fluctuations to the noise spectral density without using the two-photon formulation. We found that the noise spectral density in the two-photon formulation includes vacuum fluctuations from the main interferometer but does not include those from the local oscillator, which depends on the directly measured operators.'\naddress: ' Gravitational-Wave Science Project, National Astronomical Observatory, Mitaka, Tokyo 181-8588, Japan '\nauthor:\n- Kouji Nakamura\ntitle: |\n Vacuum fluctuations and balanced homodyne detection\\\n through ideal multi-mode photon number or power counting detectors \n---\n\nquantum measurement theory, vacuum fluctuations, balanced homodyne detection, gravitational-wave detectors\n\nIntroduction {#sec:Introduction}\n============\n\nOne of the motivations of the recent quantum measurement theory\u00a0[@Ozawa-2004] is gravitational-wave detection. However, the actual application of this theory to gravitational-wave detection requires its extension to quantum field theory. Furthermore, in quantum measurement theory, we have to specify the directly measured quantum operator. 
In interferometric gravitational-wave detectors, we may regard that the directly measured operator is specified at" +"---\nabstract: |\n We propose \u201c*aquanims*\u201d as new design metaphors for animated transitions that preserve displayed areas during the transformation. Animated transitions are used to facilitate understanding of graphical transformations between different visualizations. Area is key information to preserve during filtering or ordering transitions of area-based charts like bar charts, histograms, tree maps or mosaic plots. As liquids are incompressible fluids, we use a hydraulic metaphor to convey the sense of area preservation during animated transitions: in *aquanims*, graphical objects can change shape, position, color and even connectedness but not displayed area, as for a liquid contained in a transparent vessel or transferred between such vessels communicating through hidden pipes. We present various *aquanims* for product plots like bar charts and histograms to accommodate changes in data, in ordering of bars or in number of bins, and to provide animated tips. We also consider confusion matrices visualized as fluctuation diagrams and mosaic plots, and show how *aquanims* can be used to ease the understanding of different classification errors of real data.\\\nauthor:\n- |\n Michael Aupetit [^1]\\\n Qatar Computing Research Institute, HBKU, Doha, Qatar\nbibliography:\n- 'egbibsample.bib'\n---\n\nIntroduction\n============\n\nVisualization can be used to discover new information or" +"---\nauthor:\n- 'David O\u2019Callaghan'\n- Patrick Mannion\nbibliography:\n- 'sample.bib'\ntitle: |\n Exploring the Impact of Tunable Agents\\\n in Sequential Social Dilemmas\n---\n\nIntroduction\n============\n\nThe standard approach to developing an agent is to learn some fixed behaviour that will allow the agent to solve a sequential decision-making problem. If, however, the developer wants the agent to behave differently, the agent normally has to be partially or completely retrained. To address this shortcoming, @kallstrom2019tunable introduced a framework to train agents whose behaviour can be tuned during run-time using methods. In this framework, each set of objective preferences (scalarisation weights) corresponds to different combinations of desired agent behaviours, and the agent is trained with different weight vectors to learn different behaviours simultaneously. After the agent is trained, the weights can be adjusted on the fly to dynamically change the agent\u2019s behaviour, without the need for retraining. 
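The tuning mechanism described above can be made concrete with a short sketch: the objective preferences enter both the scalarised training reward and the agent's input, so changing them at run-time changes behaviour without retraining. The fragment below is an illustration under those assumptions, not the authors' implementation.

```python
# Illustrative sketch of the tunable-agent idea (not the authors' code):
# the scalar reward is a weighted sum of objective rewards, and the weight
# vector is appended to the agent's observation.
import numpy as np

def scalarise(reward_vector, weights):
    """Linear scalarisation of a multi-objective reward."""
    return float(np.dot(reward_vector, weights))

def sample_preference(num_objectives, rng):
    """Sample a weight vector from the simplex, one per training episode."""
    w = rng.random(num_objectives)
    return w / w.sum()

rng = np.random.default_rng(0)
weights = sample_preference(num_objectives=2, rng=rng)
observation = np.concatenate([np.zeros(4), weights])   # state + preferences
reward = scalarise(np.array([1.0, -0.5]), weights)
# After training across many sampled weight vectors, adjusting `weights`
# at run-time retunes the agent's behaviour without retraining.
```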
In this study we build on this framework, extending it to more complex environments with larger state-spaces and multiple learning agents.\n\nIn particular, we are interested in studying the suitability of the tunable agents framework to learn adaptive agent behaviours in sequential social dilemmas [@leibo2017multi], settings where there is an inherent conflict between individual and collective" +"---\nauthor:\n- 'F.Boschini'\n- 'M.Minola'\n- 'R.Sutarto'\n- 'E.Schierle'\n- 'M.Bluschke'\n- 'S.Das'\n- 'Y.Yang'\n- 'M.Michiardi'\n- 'Y.C.Shao'\n- 'X.Feng'\n- 'S.Ono'\n- 'R.D.Zhong'\n- 'J.A.Schneeloch'\n- 'G.D.Gu'\n- 'E.Weschke'\n- 'F.He'\n- 'Y.D.Chuang'\n- 'B.Keimer'\n- 'A.Damascelli'\n- 'A.Frano'\n- 'E.H.da Silva Neto'\ntitle: Dynamic electron correlations with charge order wavelength along all directions in the copper oxide plane\n---\n\n**Abstract**\n\n[In strongly correlated systems the strength of Coulomb interactions between electrons, relative to their kinetic energy, plays a central role in determining their emergent quantum mechanical phases.]{} We perform resonant x-ray scattering on [Bi$_2$Sr$_2$CaCu$_2$O$_{8+\\delta}$]{}, a prototypical cuprate superconductor, to probe electronic correlations within the CuO$_2$ plane. We discover a dynamic quasi-circular pattern in the $x$-$y$ scattering plane with a radius that matches the wave vector magnitude of the well-known static charge order. Along with doping- and temperature-dependent measurements, our experiments reveal a picture of charge order competing with superconductivity where short-range domains along $x$ and $y$ can dynamically rotate into any other in-plane direction. This quasi-circular spectrum, a hallmark of Brazovskii-type fluctuations, has immediate consequences for our understanding of rotational and translational symmetry breaking in the cuprates. We discuss how the combination of short- and long-range" +"---\nabstract: 'We study the propagation of radiative heat (Marshak) waves, using modified $P_1$-approximation equations. In relatively optically-thin media the heat propagation is supersonic,\u00a0i.e. hydrodynamic motion is negligible, and thus can be described by the radiative transfer Boltzmann equation, coupled with the material energy equation. However, the exact thermal radiative transfer problem is still difficult to solve and requires massive simulation capabilities. Hence, there still exists a need for adequate approximations that are comparatively easy to carry out. Classic approximations, such as the classic diffusion and classic $P_1$, fail to describe the correct heat wave velocity when the optical depth is not sufficiently high. Therefore, we use the recently developed discontinuous asymptotic $P_1$ approximation, which is a time-dependent analogue of the adjustment of the discontinuous asymptotic diffusion for two different zones. This approximation was tested via several benchmarks, showing better results than other common approximations, and has also demonstrated a good agreement with a major Marshak wave experiment and its Monte-Carlo gray simulation. Here we derive the energy expansion of the discontinuous asymptotic $P_1$ approximation in slab geometry, and test it with numerous experimental results for propagating Marshak waves inside low-density foams. The new approximation describes the heat wave" +"---\nabstract: 'Low-Power Wide-Area Network (LPWAN) is an enabling Internet-of-Things (IoT) technology that supports long-range, low-power, and low-cost connectivity to numerous devices. 
To avoid the crowding in the limited ISM band (where most LPWANs operate) and the cost of licensed bands, the recently proposed SNOW (Sensor Network over White Spaces) is a promising LPWAN platform that operates over the TV white spaces. As it is a very recent technology and is still in its infancy, the current SNOW implementation uses USRP devices as LPWAN nodes, which have high costs ($\\approx$ \\$750 USD per device) and large form-factors, hindering its applicability in practical deployment. In this paper, we implement SNOW using low-cost, low form-factor, low-power, and widely available commercial off-the-shelf (COTS) devices to enable its practical and large-scale deployment. Our choice of the COTS device (TI CC13x0: CC1310 or CC1350) consequently brings down the cost and form-factor of a SNOW node by 25x and 10x, respectively. Such an implementation of SNOW on the CC13x0 devices, however, faces a number of challenges to enable link reliability and communication range. Our implementation addresses these challenges by handling the peak-to-average power ratio problem, channel state information estimation, carrier frequency offset estimation, and the near-far power problem. Our" +"---\nabstract: 'This paper considers joint device activity detection and channel estimation in Internet of Things (IoT) networks, where a large number of IoT devices exist but only a random subset of them become active for short-packet transmission at each time slot. In particular, to improve the detection performance, we propose to leverage the *temporal correlation* in user activity, i.e., a device active at the previous time slot is more likely to be still active at the current time slot. Despite the appealing temporal correlation feature, it is challenging to unveil the connection between the estimated activity pattern for the previous time slot (which may be imperfect) and the true activity pattern at the current time slot due to the unknown estimation error. In this paper, we manage to tackle this challenge under the framework of approximate message passing (AMP). Specifically, thanks to the state evolution, the correlation between the activity pattern estimated by AMP at the previous time slot and the real activity pattern at the previous and current time slots is quantified explicitly. Based on the well-defined temporal correlation, we further manage to embed this useful SI into the design of the minimum mean-squared error (MMSE) denoisers and" +"---\nauthor:\n- 'Mohammad Akhond,'\n- 'Federico Carta,'\n- 'Siddharth Dwivedi,'\n- 'Hirotaka Hayashi,'\n- 'Sung-Soo Kim,'\n- and Futoshi Yagi\nbibliography:\n- 'ref.bib'\ntitle: 'Factorised 3d $\\mathcal{N}=4$ orthosymplectic quivers'\n---\n\n[preprint[CTP-SCU/2021017]{}]{}\n\nIntroduction and summary of results {#sec:intro}\n===================================\n\nGauge theories in three spacetime dimensions are strongly coupled in the IR; determining their low-energy dynamics is therefore generically difficult. One arena in which one can overcome this difficulty is the realm of 3d $\\mathcal{N}=4$ gauge theories. Their relevance to string theory was highlighted very early after the D-brane revolution in a landmark paper by Hanany and Witten [@Hanany:1996ie], which facilitated further explorations of the subject. 
A more recent development is to use 3d $\\mathcal{N}=4$ theories as a probe to study higher dimensional superconformal field theories (SCFTs) as well as gauge theories [@Akhond:2020vhc; @Bourget:2019rtl; @Bourget:2020asf; @Bourget:2020gzi; @Bourget:2020xdz; @Closset:2020scj; @vanBeest:2020kou; @vanBeest:2020civ; @Bourget:2020mez; @Cabrera:2018jxt; @Cabrera:2019izd; @Eckhard:2020jyr; @Closset:2020afy]. The idea is to relate the Higgs branch of these higher dimensional theories to the Coulomb branch of the 3d theory, the latter of which is dubbed the magnetic quiver (MQ). In addition to their significance to string theory or higher dimensional theories, 3d $\\mathcal{N}=4$ theories possess rich dynamics, making them interesting objects in their own" +"---\nabstract: '**Abstract.** Exponential Random Graph Models (ERGMs) have gained increasing popularity over the years. Rooted in statistical physics, the ERGMs framework has been successfully employed for reconstructing networks, detecting statistically significant patterns in graphs, and counting networked configurations with given properties. From a technical point of view, the ERGMs workflow is defined by two subsequent optimization steps: the first one concerns the maximization of Shannon entropy and leads to identifying the functional form of the ensemble probability distribution that is maximally non-committal with respect to the missing information; the second one concerns the maximization of the likelihood function induced by this probability distribution and leads to its numerical determination. This second step translates into the resolution of a system of $O(N)$ non-linear, coupled equations (with $N$ being the total number of nodes of the network under analysis), a problem that is affected by three main issues, i.e. *accuracy*, *speed* and *scalability*. The present paper aims at addressing these problems by comparing the performance of three algorithms (i.e. Newton\u2019s method, a quasi-Newton method and a recently-proposed fixed-point recipe) in solving several ERGMs, defined by binary and weighted constraints in both a directed and an undirected fashion. While Newton\u2019s method performs best" +"---\nabstract: 'Fluid-structure interactions are central to many bio-molecular processes, and they pose a great challenge for computational and modeling methods. In this paper, we consider the immersed boundary method (IBM) for biofluid systems, and to alleviate the computational cost, we apply reduced-order techniques to eliminate the degrees of freedom associated with the large number of fluid variables. We show how reduced models can be derived using Petrov-Galerkin projection and subspaces that maintain the incompressibility condition. More importantly, the reduced-order model is shown to preserve Lyapunov stability. We also address the practical issue of computing coefficient matrices in the reduced-order model using an interpolation technique. 
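The Petrov-Galerkin projection mentioned here can be illustrated on a generic linear system $\dot{x} = Ax$: the state is approximated in a trial subspace $V$ and the residual is made orthogonal to a test subspace $W$. The sketch below shows only this generic step; the incompressibility-preserving subspaces and the stability argument of the paper are not reproduced, and all names are hypothetical.

```python
# Generic Petrov-Galerkin reduction of dx/dt = A x (illustrative sketch):
# approximate x ~ V xr and enforce W^T (dx/dt - A x) = 0, which gives
# dxr/dt = (W^T V)^{-1} W^T A V xr.
import numpy as np

def petrov_galerkin_reduce(A, V, W):
    """Return the reduced operator (W^T V)^{-1} W^T A V."""
    return np.linalg.solve(W.T @ V, W.T @ A @ V)

n, r = 200, 10
rng = np.random.default_rng(1)
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # stable full model
V = np.linalg.qr(rng.standard_normal((n, r)))[0]      # trial basis
W = np.linalg.qr(A @ V)[0]                            # one common test-basis choice
A_r = petrov_galerkin_reduce(A, V, W)                 # r x r reduced model
```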
The efficiency and robustness of the proposed formulation are examined with test examples from various applications.'\nauthor:\n- |\n Yushuang Luo\\\n Department of Mathematics\\\n The Pennsylvania State University, University Park, PA 16802, USA\\\n `yzl55@psu.edu`\\\n Xiantao Li\\\n Department of Mathematics,\\\n The Pennsylvania State University, University Park, PA 16802, USA\\\n `xxl12@psu.edu`\\\n Wenrui Hao\\\n Department of Mathematics\\\n The Pennsylvania State University, University Park, PA 16802, USA\\\n `wxh64@psu.edu`\\\nbibliography:\n- 'reduced-order.bib'\ntitle: Projection based model reduction for the immersed boundary method\n---\n\nIntroduction {#sec:intro}\n============\n\nBiofluid dynamics, the study of cellular movement in biological fluid flow, is essential" +"---\nabstract: 'In contextual anomaly detection, an object is only considered anomalous within a specific context. Most existing methods use a single context based on a set of user-specified contextual features. However, identifying the right context can be very challenging in practice, especially in datasets with a large number of attributes. Furthermore, in real-world systems, there might be multiple anomalies that occur in different contexts and, therefore, require a combination of several \u201cuseful\u201d contexts to unveil them. In this work, we propose a novel approach, called WisCon (Wisdom of the Contexts), to effectively detect complex contextual anomalies in situations where the true contextual and behavioral attributes are unknown. Our method constructs an ensemble of multiple contexts, with varying importance scores, based on the assumption that not all useful contexts are equally so. We estimate the importance of each context using an active learning approach with a novel query strategy. Experiments show that WisCon significantly outperforms existing baselines in different categories (i.e., active learning methods, unsupervised contextual and non-contextual anomaly detectors) on 18 datasets. Furthermore, the results support our initial hypothesis that there is no single perfect context that successfully uncovers all kinds of contextual anomalies, and leveraging the \u201cwisdom\u201d of" +"---\nabstract: 'In this study we investigate the nuclear quantum effects (NQEs)\u00a0on the acidity constant ([p$K_A$]{}) of liquid water isotopologues at ambient conditions by path integral molecular dynamics (PIMD) simulations. We compared simulations using a fully explicit solvent model with a classical polarizable force field, density functional tight binding, and ab initio density functional theory, which correspond to empirical, semiempirical, and ab initio PIMD simulations, respectively. The centroid variable with respect to the proton coordination number of a water molecule was restrained to compute the gradient of the free energy, which measures the reversible work of the proton abstraction for the quantum mechanical system. The free energy curve obtained by thermodynamic integration was used to compute the [p$K_A$]{}\u00a0value based on probabilistic determination. This technique not only reproduces the experimentally measured [p$K_A$]{}\u00a0value of liquid D$_2$O (14.86) but also allows for a theoretical prediction of the [p$K_A$]{}\u00a0values of liquid T$_2$O, aqueous HDO and HTO, which are unknown due to their scarcity. 
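The conversion from a computed deprotonation free energy to an acidity constant rests on the standard thermodynamic relation below; the paper's probabilistic determination refines how the free energy is extracted, but the basic conversion is the textbook one.

```latex
% Standard relation between deprotonation free energy and acidity constant:
\[
  \mathrm{p}K_A \;=\; \frac{\Delta F}{k_B T \ln 10},
\]
% so a shift \delta(\Delta F) in the free energy translates into a pK_A
% shift of \delta(\Delta F) / (k_B T \ln 10).
```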
It is also shown that the NQEs\u00a0on the free energy curve can result in a downshift of $4.5\\pm 0.9$\u00a0[p$K_A$]{}\u00a0units in the case of liquid water, which indicates that the NQEs\u00a0play an indispensable" +"---\nabstract: |\n A subset of the integer planar grid $[N] \\times [N]$ is called *corner-free* if it contains no triple of the form $(x,y), (x+\\delta,y), (x,y+\\delta)$. It is known that such a set has a vanishingly small density, but how large this density can be remains unknown. The only previous construction, and its variants, were based on Behrend\u2019s large subset of $[N]$ with no $3$-term arithmetic progression. Here we provide the first construction of a corner-free set that does not rely on a large set of integers with no arithmetic progressions. Our approach to the problem is based on the theory of communication complexity.\\\n In the $3$-player exactly-$N$ problem the players need to decide whether $x+y+z=N$ for inputs $x,y,z$ and fixed $N$. This is the first problem considered in the multiplayer Number On the Forehead (NOF) model. Despite the basic nature of this problem, no progress has been made on it throughout the years. Only recently have explicit protocols been found for the first time, yet no improvement in complexity has been achieved to date. The present paper offers the first improved protocol for the exactly-$N$ problem.\nauthor:\n- 'Nati Linial[^1]'\n- Adi Shraibman\ntitle: 'Larger Corner-Free Sets from" +"---\nabstract: 'Magnetic braking (MB) likely plays a vital role in the evolution of low-mass X-ray binaries (LMXBs). However, the physics of MB is still uncertain, and there are various proposed scenarios for MB in the literature. To examine and compare the efficiency of MB, we investigate the LMXB evolution with five proposed MB laws. Combining detailed binary evolution calculations with binary population synthesis, we obtain the expected properties of LMXBs and their descendants, binary millisecond pulsars. We then discuss the strengths and weaknesses of each MB law by comparing the calculated results with observations. We conclude that the $\\tau$-boosted MB law seems to best match the observational characteristics.'\nauthor:\n- 'Zhu-Ling Deng$^{1,2,3,4,5}$, Xiang-Dong Li$^{4,5*}$, Zhi-Fu Gao$^{1,2*}$, Yong Shao$^{4,5}$'\ntitle: Evolution of LMXBs under Different Magnetic Braking Prescriptions \n---\n\nINTRODUCTION\n============\n\nLow-mass X-ray binaries (LMXBs) contain an accreting compact star (a black hole or a neutron star) and a low-mass donor. Mass transfer (MT) in LMXBs proceeds via Roche-lobe overflow (RLOF). There are about 200 LMXBs discovered in the Galaxy [@Liu07], and their formation remains a controversial topic [@T06; @Li15 for reviews]. Here, we focus on the evolution of LMXBs with a neutron star (NS). For" +"---\nabstract: 'Couplings play a central role in the analysis of Markov chain convergence and in the construction of novel Markov chain Monte Carlo estimators, diagnostics, and variance reduction techniques. The set of possible couplings is often intractable, frustrating the search for tight bounds and efficient estimators. To address this challenge for algorithms in the Metropolis\u2013Hastings (MH) family, we establish a simple characterization of the set of MH transition kernel couplings. We then extend this result to describe the set of maximal couplings of the MH kernel, resolving an open question of @OLeary2020. 
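One concrete member of the family of couplings being characterized is easy to write down: run two MH chains with a shared proposal increment and a shared acceptance uniform (common random numbers). The sketch below is an editorial illustration only; the maximal couplings analysed in the paper are different, more elaborate objects.

```python
# One simple MH transition-kernel coupling (common random numbers):
# both chains share the random-walk increment and the acceptance variable.
import numpy as np

def coupled_mh_step(x, y, log_target, rng, step=1.0):
    z = rng.normal(0.0, step)      # shared proposal increment
    log_u = np.log(rng.random())   # shared acceptance variable
    xp, yp = x + z, y + z
    if log_u < log_target(xp) - log_target(x):
        x = xp
    if log_u < log_target(yp) - log_target(y):
        y = yp
    return x, y                    # once x == y, the chains stay together

rng = np.random.default_rng(0)
log_target = lambda t: -0.5 * t * t        # standard normal target
x, y = -5.0, 5.0
for _ in range(1000):
    x, y = coupled_mh_step(x, y, log_target, rng)
```

Note that this coupling contracts the two chains but never forces them to meet exactly; couplings that do, such as maximal couplings, are precisely the objects the paper describes.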
Our results represent an advance in understanding the MH transition kernel and a step forward for coupling this popular class of algorithms.'\naddress:\n- 'Tudor Investment Corporation, '\n- 'Department of Statistics, Rutgers University, '\nauthor:\n- \n- \nbibliography:\n- 'refs.bib'\ntitle: 'Metropolis\u2013Hastings transition kernel couplings'\n---\n\nand\n\nIntroduction {#sec:intro}\n============\n\nCouplings have played an important role in the analysis of Markov chain convergence since the early days of the field\u00a0[@doeblin1938expose; @harris1955chains]. Beyond their role as a proof technique, couplings have also been used as a basis for sampling\u00a0[@propp:wilson:1996; @fill1997interruptible; @neal1999circularly; @flegal2012exact], convergence diagnosis [@johnson1996studying; @johnson1998coupling; @biswas2019estimating], variance reduction [@neal2001improving; @Goodman2009; @piponi2020hamiltonian], and unbiased estimation" +"---\nauthor:\n- 'T. Masseron'\n- 'Y. Osorio'\n- 'D.A.Garc\u00eda-Hern\u00e1ndez'\n- 'C. Allende Prieto'\n- 'O. Zamora'\n- 'Sz. M\u00e9sz\u00e1ros'\nbibliography:\n- 'NLTEAPOGEE.bib'\ndate: 'Received ; accepted'\ntitle: Probing 3D and NLTE models using APOGEE observations of globular cluster stars\n---\n\n[Hydrodynamical (or 3D) and non-local thermodynamic equilibrium (NLTE) effects are known to affect abundance analyses. However, there are very few observational abundance tests of 3D and NLTE models.]{} [We developed a new way of testing the abundance predictions of 3D and NLTE models, taking advantage of large spectroscopic survey data.]{} [We use a line-by-line analysis of the Apache Point Observatory Galactic Evolution Experiment (APOGEE) spectra (H band) with the Brussels Automatic Code for Characterizing High accUracy Spectra (BACCHUS). We compute line-by-line abundances of Mg, Si, Ca, and Fe for a large number of globular cluster K giants in the APOGEE survey. We compare this line-by-line analysis against NLTE and 3D predictions.]{} [While the 1D\u2013NLTE models provide corrections in the right direction, there are quantitative discrepancies between different models. We observe a better agreement with the data for the models including reliable collisional cross-sections. The agreement between data and models is not always satisfactory when the 3D spectra are computed" +"---\nauthor:\n- Quanhao Zhang\n- Rui Liu\n- Yuming Wang\n- Zhenjun Zhou\n- Bin Zhuang\n- Xiaolei Li\ntitle: 'How flux feeding causes eruptions of solar magnetic flux ropes with the hyperbolic flux tube configuration?'\n---\n\nIntroduction {#sec:introduction}\n============\n\nLarge-scale solar eruptions include prominence/filament eruptions, flares, and coronal mass ejections (CMEs) [@Benz2008a; @Chen2011a; @Parenti2014a; @Liu2020]. They are capable of inflicting huge impacts on the solar-terrestrial system [@svestka2001a; @Cheng2014; @Shen2014; @Lugaz2017; @Gopalswamy2018a]. It is widely accepted that different kinds of large-scale solar eruptions are closely related to each other: they are essentially different manifestations of the same eruptive process of a coronal magnetic flux rope system [@Zhang2001; @Vrvsnak2005a; @vanDriel2015a; @Jiang2018; @Liu2018a; @Yan2020]. Therefore, it is of great significance to investigate how the eruption of coronal magnetic flux ropes is initiated. 
According to the magnetic topology, coronal flux ropes are classified into two types of configurations: if the flux rope sticks to the photosphere, with a bald patch separatrix surface [BPSS, @Titov1993a; @Titov1999a; @Gibson2006a] wrapping the flux rope, this is usually called the BPS configuration [@Filippov2013]; for the flux rope system in which the rope is suspended in the corona and wrapped around by a hyperbolic flux tube (HFT), it" +"---\nabstract: 'We consider a one-dimensional run-and-tumble particle, or persistent random walk, in the presence of an absorbing boundary located at the origin. After each tumbling event, which occurs at a constant rate $\\gamma$, the (new) velocity of the particle is drawn randomly from a distribution $W(v)$. We study the survival probability $S(x,t)$ of a particle starting from $x \\geq 0$ up to time $t$ and obtain an explicit expression for its double Laplace transform (with respect to both $x$ and $t$) for an [*arbitrary*]{} velocity distribution $W(v)$, not necessarily symmetric. This result is obtained as a consequence of Spitzer\u2019s formula, which is well known in the theory of random walks and can be viewed as a generalization of the Sparre Andersen theorem. We then apply this general result to the specific case of a two-state particle with velocity $\\pm v_0$, the so-called persistent random walk (PRW), and in the presence of a constant drift $\\mu$ and obtain an explicit expression for $S(x,t)$, for which we present more detailed results. Depending on the drift $\\mu$, we find a rich variety of behaviours for $S(x,t)$, leading to three distinct cases: (i) [*subcritical*]{} drift $-v_0\\!<\\!\\mu\\!<\\! v_0$, (ii) [*supercritical*]{} drift $\\mu < -v_0$" +"---\nabstract: 'Practical sequence classification tasks in natural language processing often suffer from low training data availability for target classes. Recent works towards mitigating this problem have focused on transfer learning using embeddings pre-trained on often unrelated tasks, for instance, language modeling. We adopt an alternative approach by transfer learning on an ensemble of related tasks using prototypical networks under the meta-learning paradigm. Using intent classification as a case study, we demonstrate that increasing variability in training tasks can significantly improve classification performance. Further, we apply data augmentation in conjunction with meta-learning to reduce sampling bias. We make use of a conditional generator for data augmentation that is trained directly using the meta-learning objective and simultaneously with prototypical networks, hence ensuring that data augmentation is customized to the task. We explore augmentation in the sentence embedding space as well as prototypical embedding space. Combining meta-learning with augmentation provides up to 6.49% and 8.53% relative F1-score improvements over the best performing systems in 5-shot and 10-shot learning, respectively.'\naddress: |\n $^1$ Signal Analysis and Interpretation Lab, USC, Los Angeles, CA\\\n $^2$ Amazon Alexa, Cambridge, MA\nbibliography:\n- 'protoda.bib'\ntitle: 'PROTODA: EFFICIENT TRANSFER LEARNING FOR FEW-SHOT INTENT CLASSIFICATION'\n---\n\nmeta learning, prototypical" +"---\nabstract: 'The dimension of the encoder output (i.e., the code layer) in an autoencoder is a key hyper-parameter for representing the input data in a proper space. This dimension must be carefully selected in order to guarantee the desired reconstruction accuracy. 
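The trade-off flagged in this abstract (reconstruction accuracy versus code-layer dimension) is easy to visualise in the linear special case, where the optimal autoencoder of a given code dimension is a truncated SVD. The sketch below is an editorial illustration of that trade-off, not the authors' method.

```python
# Best achievable squared reconstruction error of a *linear* autoencoder
# as a function of the code-layer dimension (via truncated SVD).
import numpy as np

def linear_ae_error(X, code_dim):
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    return float(np.sum(s[code_dim:] ** 2))   # energy in discarded directions

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64)) @ rng.standard_normal((64, 64))
for r in (4, 8, 16, 32):
    print(r, linear_ae_error(X, r))           # error shrinks as r grows
```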
Although overcomplete representation can address this dimension issue, the computational complexity will increase with dimension. Inspired by non-parametric methods, we propose a metalearning approach to increase the number of basis vectors used in dynamic sparse coding on the fly. An actor-critic algorithm is deployed to automatically choose an appropriate dimension for feature vectors based on the required level of accuracy. The proposed method benefits from online dictionary learning and the fast iterative shrinkage-thresholding algorithm (FISTA) as the optimizer in the inference phase. It aims at choosing the minimum number of bases for the overcomplete representation subject to the reconstruction error threshold. This method allows for online control of both the representation dimension and the reconstruction error in a dynamic framework.'\nauthor:\n- 'Pedram\u00a0Fekri, Ali\u00a0Akbar\u00a0Safavi, Mehrdad\u00a0Hosseini\u00a0Zadeh, and\u00a0Peyman\u00a0Setoodeh [^1] [^2] [^3]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'Main.bib'\ntitle: 'Metalearning: Sparse Variable-Structure Automata'\n---\n\nSparse coding, metalearning, variable-structure automata.\n\nIntroduction {#intro}\n============\n\nSparse coding represents inputs by generating" +"---\nbibliography:\n- 'rr.bib'\n- 'sample.bib'\n- 'sample\\_.bib'\n- 'Bays.bib'\n- '2018.bib'\n- '2020June.bib'\n- 'covid.bib'\n- 'Chandra-Rohitash.bib'\n- 'aicrg.bib'\n- 'usyd.bib'\n---\n\n[**** ]{}\\\nRohitash Chandra ^1\\ \\*^, Ayush Jain ^2^ , Divyanshu Singh Chauhan ^3^\\\n\n1\\. UNSW Data Science Hub & School of Mathematics and Statistics, University of New South Wales, Sydney, Australia\\\n2. Department of Electronics and Electrical Engineering, Indian Institute of Technology Guwahati, Assam, India\\\n3. Department of Mechanical Engineering, Indian Institute of Technology Guwahati, Assam, India\\\n\nThese authors contributed equally to this work. \\* Corresponding author\\\nE-mail: rohitash.chandra@unsw.edu.au (RC)\n\nAbstract {#abstract .unnumbered}\n========\n\nThe COVID-19 pandemic continues to have a major impact on health and medical infrastructure, economy, and agriculture. Prominent computational and mathematical models have been unreliable due to the complexity of the spread of infections. Moreover, lack of data collection and reporting makes modelling attempts difficult and unreliable. Hence, we need to revisit the situation with reliable data sources and innovative forecasting models. Deep learning models such as recurrent neural networks are well suited for modelling spatiotemporal sequences. In this paper, we apply recurrent neural networks such as long short-term memory (LSTM), bidirectional LSTM, and encoder-decoder LSTM models for multi-step (short-term) COVID-19" +"---\nabstract: 'We demonstrate the utility of the Multi-Level Intermediate Representation (MLIR) for quantum computing. Specifically, we extend MLIR with a new quantum dialect that enables the expression and compilation of common quantum assembly languages. The true utility of this dialect is in its ability to be lowered to the LLVM intermediate representation (IR) in a manner that is adherent to the quantum intermediate representation (QIR) specification recently proposed by Microsoft. We leverage a `qcor`-enabled implementation of the QIR quantum runtime API to enable a retargetable (quantum hardware agnostic) compiler workflow mapping quantum languages to hybrid quantum-classical binary executables and object code. 
We evaluate and demonstrate this novel compiler workflow with quantum programs written in OpenQASM 2.0. We provide concrete examples detailing the generation of MLIR from OpenQASM source files, the lowering process from MLIR to LLVM IR, and ultimately the generation of executable binaries targeting available quantum processors.'\nauthor:\n- \nbibliography:\n- 'main.bib'\ntitle: 'A MLIR Dialect for Quantum Assembly Languages [^1] '\n---\n\nquantum computing, quantum programming, quantum simulation, programming languages\n\nIntroduction\n============\n\nThe availability of noisy quantum processing units (QPUs) from a variety of hardware vendors has raised new research and development questions into application use cases," +"---\nbibliography:\n- 'asyn-theory.bib'\n- 'constrained-online.bib'\n---\n\n0.16 true in by 0.16 true in\n\n[c]{}Xiaohan Wei\\\n\n------------------------------------------------------------------------\n\n1.0 true in Presented to the\\\nFACULTY OF THE USC GRADUATE SCHOOL\\\nUNIVERSITY OF SOUTHERN CALIFORNIA\\\nIn Partial Fulfillment of the\\\nRequirements for the Degree\\\nDOCTOR OF PHILOSOPHY\\\n\\\n\n[\u00a0Copyright\u00a0 2019 \u00a0\u00a0Xiaohan Wei]{}\n\n {#section .unnumbered}\n\nApproved by\\\nProfessor Michael Neely,\\\nCommittee Chair,\\\nDepartment of Electrical Engineering,\\\n*University of Southern California*.\\\nProfessor Stanislav Minsker,\\\nCommittee Chair,\\\nDepartment of Mathematics,\\\n*University of Southern California*.\\\nProfessor Larry Goldstein,\\\nDepartment of Mathematics,\\\n*University of Southern California*.\\\nProfessor Mihailo Jovanovic,\\\nDepartment of Electrical Engineering,\\\n*University of Southern California*.\\\nProfessor Ashutosh Nayyar,\\\nDepartment of Electrical Engineering,\\\n*University of Southern California*.\\\n\nDedication {#dedication .unnumbered}\n==========\n\nTo my parents and my wife, Yuhong, who supported me both mentally and financially over the years.\n\nAcknowledgements {#acknowledgements .unnumbered}\n================\n\nFirst, I would like to thank my advisor professor Michael J. Neely for guiding me throughout the PhD journey since Summer 2013. He is a man of accuracy and rigorousness, always passionate about discussing concrete research problems, and willing to roll up the sleeves and grind through technical details with me. His way of treating research topics significantly impacts me. Rather than blindly" +"---\nabstract: 'Quantifying entanglement properties of mixed states in quantum field theory via entanglement of purification and reflected entropy is a new and challenging subject. In this work, we study both quantities for two spherical subregions far away from each other in the vacuum of a conformal field theory in any number of dimensions. Using lattice techniques, we find an elementary proof that the decay of both the entanglement of purification and the reflected entropy is enhanced with respect to the mutual information behaviour by a logarithm of the distance between the subregions. In the case of the Ising spin chain at criticality and the related free fermion conformal field theory, we also compute the overall coefficients numerically for both quantities of interest.'\nauthor:\n- 'Hugo A. Camargo'\n- Lucas Hackl\n- 'Michal P. 
Heller'\n- Alexander Jahn\n- Bennet Windt\nbibliography:\n- 'references.bib'\ntitle: |\n Long-distance entanglement of purification and reflected entropy\\\n in conformal field theory\n---\n\n[[*[**Introduction**]{}.*]{}]{} Understanding quantum information properties of quantum field theory (QFT) and, through holography\u00a0[@Maldacena:1997re; @Gubser:1998bc; @Witten:1998qj], also of gravity has been an important contemporary line of research\u00a0[@Casini:2009sr; @Harlow:2014yka; @Rangamani:2016dms; @Susskind:2018pmk; @Headrick:2019eth]. The main object of interest has been the entanglement entropy" +"---\nabstract: 'We carried out a detailed study of the temporal and broadband spectral behaviour of one of the brightest misaligned active galaxies in $\\gamma$-rays, NGC 1275, utilising $11$ years of [*Fermi*]{}, and available [*Swift*]{} and [*AstroSat*]{} observations. Based on the cumulative flux distribution of the $\\gamma$-ray lightcurve, we identified four distinct activity states and noticed an increase in the baseline flux during the first three states. A similar increase in the average flux was also noticed in the X-ray and UV bands. Large flaring activity in $\\gamma$-rays was noticed in the fourth state. The source was observed twice by [*AstroSat*]{} for shorter intervals ($\\sim$days) during the longer observing periods ($\\sim$years) of states 3 and 4. During [*AstroSat*]{} observing periods, the source $\\gamma$-ray flux was higher than the average flux observed during the longer duration states. The increase in the average baseline flux from state 1 to state 3 can be explained by a corresponding increase of the jet particle normalisation. The inverse Comptonisation of synchrotron photons explained the average X-ray and $\\gamma$-ray emission by jet electrons during the first three longer duration states. However, during the shorter duration [*AstroSat*]{} observing periods, a shift of the synchrotron peak frequency was noticed," +"---\nabstract: 'Quantifying and comparing patterns of dynamical ecological systems require averaging over measurable quantities. For example, to infer variation in movement and behavior, metrics such as step length and velocity are averaged over large ensembles. Yet, in nonergodic systems such averaging is inconsistent; thus, identifying ergodicity breaking is essential in ecology. Using rich high-resolution movement datasets ($>\\! 7 \\times 10^7$ localizations) from 70 individuals and continuous-time random walk modeling, we find subdiffusive behavior and ergodicity breaking in the localized movement of three species of avian predators. Small-scale, within-patch movement was found to be qualitatively different from, not inferable from, and separated from large-scale inter-patch movement. Local search is characterized by long power-law-distributed waiting times with diverging mean, giving rise to ergodicity breaking in the form of considerable variability uniquely observed at this scale. 
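The mechanism behind this kind of ergodicity breaking can be reproduced with a few lines of code: waiting times with tail exponent $\alpha < 1$ have a diverging mean, so time-averaged quantities differ from one realization to the next. The parameters below are arbitrary illustrations, not fits to the tracking data.

```python
# Power-law waiting times with P(T > t) ~ (t / t0)^(-alpha); for alpha <= 1
# the mean diverges and sample averages never settle (ergodicity breaking).
import numpy as np

def powerlaw_waits(alpha, size, rng, t0=1.0):
    return t0 * rng.random(size) ** (-1.0 / alpha)   # inverse-CDF sampling

rng = np.random.default_rng(0)
for trial in range(3):
    waits = powerlaw_waits(alpha=0.7, size=10_000, rng=rng)
    print(trial, waits.mean())   # strongly realization-dependent 'mean'
```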
This implies that wild animal movement is scale-specific, with no typical waiting time at the local scale.'\nauthor:\n- 'Ohad Vilk$^{a,b,c}$'\n- 'Yotam Orchan$^{b, c}$'\n- 'Motti Charter$^{b, c, d}$'\n- 'Nadav Ganot$^{b, c}$'\n- 'Sivan Toledo$^{c, e}$'\n- 'Ran Nathan$^{b, c}$'\n- 'Michael Assaf$^{a}$'\nbibliography:\n- 'references.bib'\ntitle: 'Ergodicity breaking in area-restricted search of avian predators'\n---\n\nINTRODUCTION\n============\n\nMovement of organisms is of key interest" +"---\nabstract: 'We present a new model of collective decision making that captures important crowd-funding and donor coordination scenarios. In the setting, there is a set of projects (each with its own cost) and a set of agents (that have their budgets as well as preferences over the projects). An outcome is a set of projects that are funded along with the specific contributions made by the agents. For the model, we identify meaningful axioms that capture concerns including fairness, efficiency, and participation incentives. We then propose desirable rules for the model and study which sets of axioms can be satisfied simultaneously. An experimental study indicates the relative performance of different rules as well as the price of enforcing fairness axioms.'\nauthor:\n- Haris Aziz\n- Aditya Ganguly\ntitle: |\n Participatory Funding Coordination:\\\n Model, Axioms and Rules\n---\n\nIntroduction\n============\n\nConsider a scenario in which a group of house-mates want to pitch in money to buy some common items for the house but not every item is of interest or use to everyone. Each of the items (e.g. TV, video game console, music system, etc.) has its price. Each resident would like to have as many items purchased that are" +"---\nabstract: 'We developed a noncontact measurement system for monitoring the respiration of multiple people using millimeter-wave array radar. To separate the radar echoes of multiple people, conventional techniques cluster the radar echoes in the time, frequency, or spatial domain. Focusing on the measurement of the respiratory signals of multiple people, we propose a method called respiratory-space clustering, in which individual differences in the respiratory rate are effectively exploited to accurately resolve the echoes from human bodies. The proposed respiratory-space clustering can separate echoes, even when people are located close to each other. In addition, the proposed method can be applied when the number of targets is unknown and can accurately estimate the number and positions of people. We perform multiple experiments involving five or seven participants to verify the performance of the proposed method, and quantitatively evaluate the estimation accuracy for the number of people and the respiratory intervals. The experimental results show that the average root-mean-square error in estimating the respiratory interval is 196 ms using the proposed method. The use of the proposed method, rather than the conventional method, improves the accuracy of the estimation of the number of people by 85.0%, which indicates the effectiveness of the" +"---\nabstract: 'We study the localization properties of generalized, two- and three-dimensional Lieb lattices, $\\mathcal{L}_2(n)$ and $\\mathcal{L}_3(n)$, $n= 1, 2, 3$ and $4$, at energies corresponding to flat and dispersive bands using the transfer matrix method (TMM) and finite size scaling (FSS). We find that the scaling properties of the flat bands are different from scaling in dispersive bands for all $\\mathcal{L}_d(n)$. 
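The transfer matrix method used here can be illustrated in its simplest setting, a disordered 1D chain, where the Lyapunov exponent of a product of $2\times 2$ transfer matrices gives the inverse localization length. The quasi-1D computation for the Lieb lattices follows the same logic with larger matrices; the sketch below is the 1D toy version only, with arbitrary parameters.

```python
# TMM for a 1D Anderson chain: psi_{n+1} = (E - eps_n) psi_n - psi_{n-1}.
# The growth rate of the transfer-matrix product is the Lyapunov exponent.
import numpy as np

def lyapunov_1d(E, W, steps, rng):
    v, log_norm = np.array([1.0, 0.0]), 0.0
    for _ in range(steps):
        eps = rng.uniform(-W / 2, W / 2)                 # diagonal disorder
        v = np.array([[E - eps, -1.0], [1.0, 0.0]]) @ v  # transfer matrix
        n = np.linalg.norm(v)
        log_norm += np.log(n)
        v /= n                                           # renormalize
    return log_norm / steps

gamma = lyapunov_1d(E=0.0, W=1.0, steps=100_000, rng=np.random.default_rng(0))
xi = 1.0 / gamma   # localization length in units of the lattice spacing
```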
For the $d=3$ dimensional case, states are extended for disorders $W$ down to $W=0.01 t$ at the flat bands, indicating that the disorder can lift the degeneracy of the flat bands quickly. The phase diagram with periodic boundary conditions for $\\mathcal{L}_3(1)$ looks similar to the one for hard boundaries [@Liu2020LocalizationLattices]. We present the critical disorder $W_c$ at energy $E=0$ and find a decreasing $W_c$ for increasing $n$ for $\\mathcal{L}_3(n)$, up to $n=3$. Lastly, we present a table of FSS parameters, including so-called irrelevant variables, but the results indicate that the accuracy is too low to determine these reliably.'\naddress:\n- 'School of Physics and Optoelectronics, Xiangtan University, Xiangtan 411105, China'\n- 'Department of Physics, University of Warwick, Coventry, CV4 7AL, United Kingdom'\nauthor:\n- Jie Liu\n- Xiaoyu Mao\n- Jianxin Zhong\n- 'Rudolf A.\u00a0R\u00f6mer'\ntitle: Localization" +"---\nabstract: 'We study a new class of quiver algebras on surfaces, called \u2018geodesic ghor algebras\u2019. These algebras generalize cancellative dimer algebras on a torus to higher genus surfaces, where the relations come from perfect matchings rather than a potential. Although cancellative dimer algebras on a torus are noncommutative crepant resolutions, the center of any dimer algebra on a higher genus surface is just the polynomial ring in one variable, and so the center and surface are unrelated. In contrast, we establish a rich interplay between the central geometry of geodesic ghor algebras and the topology of the surface in which they are embedded. Furthermore, we show that noetherian central localizations of such algebras are endomorphism rings of modules over their centers.'\naddress:\n- 'School of Mathematics, University of Leeds, Leeds, LS2 9JT, United Kingdom'\n- On leave from the University of Graz\n- 'Institut f\u00fcr Mathematik und Wissenschaftliches Rechnen, Universit\u00e4t Graz, Heinrichstrasse 36, 8010 Graz, Austria.'\nauthor:\n- Karin Baur\n- Charlie Beil\ntitle: A generalization of cancellative dimer algebras to hyperbolic surfaces\n---\n\nIntroduction\n============\n\nCancellative dimer algebras on a torus have been extensively studied in the contexts of noncommutative resolutions, Calabi-Yau algebras, and stability conditions, e.g., [@Br;" +"---\nabstract: 'Let $N$ be a connected nonorientable surface with or without boundary and punctures, and $j\\colon S\\rightarrow N$ be the orientation double covering. It has previously been proved that the orientation double covering $j$ induces an embedding $\\iota\\colon\\mathrm{Mod}(N)$ $\\hookrightarrow$ $\\mathrm{Mod}(S)$ with one exception. In this paper, we prove that this injective homomorphism $\\iota$ is a quasi-isometric embedding. The proof is based on the semihyperbolicity of $\\mathrm{Mod}(S)$, which has already been established. 
We also prove that the embedding $\\mathrm{Mod}(F'') \\hookrightarrow \\mathrm{Mod}(F)$ induced by an inclusion of a pair of possibly nonorientable surfaces $F'' \\subset F$ is a quasi-isometric embedding.'\naddress:\n- ' (Takuya Katayama) Department of Mathematics, Faculty of Science, Gakushuin University, 1-5-1 Mejiro, Toshima-ku, Tokyo 171-8588, Japan '\n- ' (Erika Kuno) Department of Mathematics, Graduate School of Science, Osaka University, 1-1 Machikaneyama-cho Toyonaka, Osaka 560-0043, Japan '\nauthor:\n- Takuya Katayama\n- Erika Kuno\ntitle: 'The mapping class group of a nonorientable surface is quasi-isometrically embedded in the mapping class group of the orientation double cover'\n---\n\nIntroduction {#Introduction}\n============\n\nLet $S=S_{g,p}^{b}$ be the compact connected orientable surface of genus $g$ with $b$ boundary components and $p$ punctures, and $N=N_{g,p}^{b}$ be the compact connected nonorientable surface of genus" +"---\nabstract: 'In many research areas, for example motion and gesture generation, objective measures alone do not provide an accurate impression of key stimulus traits such as perceived quality or appropriateness. The gold standard is instead to evaluate these aspects through user studies, especially subjective evaluations of video stimuli. Common evaluation paradigms either present individual stimuli to be scored on Likert-type scales, or ask users to compare and rate videos in a pairwise fashion. However, the time and resources required for such evaluations scale poorly as the number of conditions to be compared increases. Building on standards used for evaluating the quality of multimedia codecs, this paper instead introduces a framework for granular rating of multiple comparable videos in parallel. This methodology essentially analyses all condition pairs at once. Our contributions are 1) a proposed framework, called HEMVIP, for parallel and granular evaluation of multiple video stimuli and 2) a validation study confirming that results obtained using the tool are in close agreement with results of prior studies using conventional multiple pairwise comparisons.'\nauthor:\n- Patrik Jonell\n- Youngwoo Yoon\n- Pieter Wolfert\n- Taras Kucherenko\n- Gustav Eje Henter\nbibliography:\n- 'ref.bib'\ntitle: 'HEMVIP: Human Evaluation of Multiple Videos" +"---\nabstract: 'To better understand the process by which humans make navigation decisions when tasked with multiple stopovers, we analyze motion data captured from shoppers in a grocery store. We discover several trends in the data that are consistent with a noisy decision making process for the order of item retrieval, and decompose a shopping trip into a sequence of discrete choices about the next item to retrieve. Our analysis reveals that the likelihood of inverting any two items in the order is monotonically bound to the information-theoretic entropy of the pair-wise ordering task. Based on this analysis, we propose a noisy distance estimation model for predicting the order of item retrieval given a shopping list. We show that our model theoretically reproduces the entropy-governed trend seen in the data with high accuracy, and in practice matches the trends in the data when used to simulate the same shopping lists. Our approach has direct applications to improving simulations of human navigation in retail and other settings.'\nauthor:\n- 'Nicholas Sohre$^{1}$'\n- 'Alisdair O. G. Wallis$^{2}$'\n- 'Stephen J. 
Guy$^{1}$'\nbibliography:\n- 'apssamp.bib'\ntitle: 'An Information-Theoretic Law Governing Human Multi-Task Navigation Decisions'\n---\n\nIntroduction\\[intro\\]\n=====================\n\nUnderstanding human flow through indoor buildings" +"---\nabstract: 'The recently commissioned Dark Energy Spectroscopic Instrument (DESI) will measure the expansion history of the universe using the Baryon Acoustic Oscillation technique. The spectra of 35 million galaxies and quasars over 14000 sq deg will be measured during the life of the experiment. A new prime focus corrector for the KPNO Mayall telescope delivers light to 5000 fiber optic positioners. The fibers in turn feed ten broad-band spectrographs. We describe key aspects and lessons learned from the development, delivery and installation of the fiber system at the Mayall telescope.'\nauthor:\n- Claire Poppett\n- Patrick Jelinsky\n- Julien Guy\n- Jerry Edelstein\n- Sharon Jelinsky\n- Jessica Aguilar\n- Ray Sharples\n- Jurgen Schmoll\n- David Bramall\n- Luke Tyas\n- Paul Martini\n- Kevin Fanning\n- Michael Levi\n- David Brooks\n- Peter Doel\n- Duan Yutong\n- Gregory Tarle\n- 'Enrique Gazta$\\tilde{\\text{n}}$aga'\n- Francisco Prada\n- the DESI Collaboration\nbibliography:\n- 'report.bib'\ntitle: 'Performance of the Dark Energy Spectroscopic Instrument (DESI) Fiber System'\n---\n\nINTRODUCTION {#sec:intro}\n============\n\nThe Dark Energy Spectroscopic Instrument (DESI) is a fiber-fed spectroscopic instrument installed on the 4-meter Mayall telescope at Kitt Peak National Observatory (KPNO). During its 5-year survey, DESI" +"---\nabstract: 'We provide sufficient conditions so that a homeomorphism of the real line or of the circle admits an extension to a mapping of finite distortion in the upper half-plane or the disk, respectively. Moreover, we can ensure that the quasiconformal dilatation of the extension satisfies certain integrability conditions, such as $p$-integrability or exponential integrability. Mappings satisfying the latter integrability condition are also known as David homeomorphisms. Our extension operator is the same as the one used by Beurling and Ahlfors in their celebrated work. We prove an optimal bound for the quasiconformal dilatation of the Beurling\u2013Ahlfors extension of a homeomorphism of the real line, in terms of its symmetric distortion function. More specifically, the quasiconformal dilatation is bounded above by an average of the symmetric distortion function and below by the symmetric distortion function itself. As a consequence, the quasiconformal dilatation of the Beurling\u2013Ahlfors extension of a homeomorphism of the real line is (sub)exponentially integrable, is $p$-integrable, or has a $BMO$ majorant if and only if the symmetric distortion is (sub)exponentially integrable, is $p$-integrable, or has a $BMO$ majorant, respectively. These theorems are all new and reconcile several sufficient extension conditions that have been established in the past.'" +"---\nauthor:\n- 'E. Fokken'\n- 'S. 
G\u00f6ttlich'\nbibliography:\n- './bibliography.bib'\ntitle: 'On the relation of powerflow and Telegrapher\u2019s equations: continuous and numerical Lyapunov stability'\n---\n\n\\\n[**AMS subject classifications:**]{} 93D05, 65M06\\\n\\[0.5ex\\] [**Keywords:**]{} power networks, Lyapunov function, stability, numerical approximations\n\nIntroduction {#sec:introduction}\n============\n\nIn recent years, the need to restructure energy systems to incorporate more renewable energy sources has sparked renewed interest in energy systems. In the case of electric transmission lines, several mathematical approaches can be found in the literature; see, for example, [@Andersson15book; @doi:10.1137/1.9781611974164; @Frank16; @Gottlich2021] for an overview. All modeling approaches rely on a graph structure where generators of electrical power and consumers at nodes are connected by transmission lines. From a physical point of view, voltage and current are transported along the lines while power loss might occur due to resistances. In many applications, the so-called power flow equations provide a well-established tool to analyze the performance of electric transmission lines over time. Mathematically, the resulting nonlinear system of equations is typically solved via Newton\u2019s method. Another approach, which captures not only the temporal but also the spatial resolution of power flow in transmission lines, relies on the spatially one-dimensional Telegrapher\u2019s equations. These equations" +"---\naddress: |\n $^{\\text{\\sf 1}}$SenseTime Research, Shanghai, 200233, China,\\\n $^{\\text{\\sf 2}}$School of Computer Science and Engineering, Central South University, Changsha, 410083, China,\\\n $^{\\text{\\sf 3}}$Qing yuan Research Institute, Shanghai Jiao Tong University, Shanghai, China\\\n $ $\\\n $^{\\text{\\sf \\dag}}$These authors contributed equally to this work. \nauthor:\n- 'Yifan Wu$^{\\text{\\sfb 1, 2, \\dag}}$, Min Gao$^{\\text{\\sfb 1, \\dag}}$, Min Zeng$^{\\text{2,}}$, Feiyang Chen$^{\\text{\\sfb 1}}$, Min Li$^{\\text{\\sfb 2,}*}$ and Jie Zhang$^{\\text{\\sfb 1,3,}*}$'\nbibliography:\n- 'Document.bib'\nsubtitle: Subject Section\ntitle: 'BridgeDPI: A Novel Graph Neural Network for Predicting Drug-Protein Interactions'\n---\n\nIntroduction\n============\n\n![image](figure/fig1_.png)\n\nDrug discovery and drug screening are complex. The typical timeline takes 10-20 years and costs US\\$0.5-2.6 billion [@avorn20152; @paul2010improve]. Within this process, exploring possible drug-protein interactions (DPIs) is a crucial step. Although experimental assays remain the most reliable approach for determining DPIs, they are time-consuming and cost-intensive. Therefore, efficient computational methods for predicting protein-drug interactions are significant and urgently needed. Current DPI prediction methods can be grouped into three categories: docking-based methods, machine learning-based methods, and deep learning-based methods. Docking-based methods search for the best binding position of a drug molecule inside the binding pocket of a protein [@2015Molecular; @2017Protein]. However, these methods take a lot of time and lack available 3D protein structures" +"---\nabstract: 'Anelastic convection at high Rayleigh number in a plane parallel layer with no slip boundaries is considered. Energy and entropy balance equations are derived, and they are used to develop scaling laws for the heat transport and the Reynolds number. 
The appearance of an entropy structure consisting of a well-mixed uniform interior, bounded by thin layers with entropy jumps across them, makes it possible to derive explicit forms for these scaling laws. These are given in terms of the Rayleigh number, the Prandtl number, and the bottom-to-top temperature ratio, which measures how stratified the layer is. The top and bottom boundary layers are examined and they are found to be very different, unlike in the Boussinesq case. Elucidating the structure of these boundary layers plays a crucial part in determining the scaling laws. Physical arguments governing these boundary layers are presented, concentrating on the case in which the boundary layers are thin even when the stratification is large, the incompressible boundary layer case. Different scaling laws are found, depending on whether the viscous dissipation is primarily in the boundary layers or in the bulk. The cases of both high and low Prandtl number are considered. Numerical" +"---\nabstract: 'We establish the twisted crystallographic T-duality, which is an isomorphism between Freed-Moore twisted equivariant K-groups of the position and momentum tori associated to an extension of a crystallographic group. The proof is given by identifying the map with the Dirac homomorphism in twisted Chabert\u2013Echterhoff KK-theory. We also illustrate how to exploit it in K-theory computations.'\naddress:\n- 'Department of Mathematics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo, 152-8551, Japan.'\n- 'Department of Mathematical Sciences, Shinshu University, 3-1-1 Asahi, Matsumoto, Nagano, 390-8621, Japan / RIKEN iTHEMS Program, 2-1 Hirosawa, Wako, Saitama, 351-0198, Japan'\n- 'Beijing International Center for Mathematical Research, Peking University, 5 Yiheyuan Rd, Beijing, China'\nauthor:\n- Kiyonori Gomi\n- Yosuke Kubota\n- Guo Chuan Thiang\nbibliography:\n- 'ref.bib'\ntitle: 'Twisted crystallographic T-duality via the Baum\u2013Connes isomorphism'\n---\n\nIntroduction\n============\n\nT-duality arose as a certain equivalence between string theories compactified on a circle and on a dual circle [@buscher1987symmetry]. That a rich mathematical structure lies behind *topological T-duality* became apparent from [@bouwknegtDualityTopologyChange2004], where it was found that the K-theories of a circle bundle twisted by an $H$-flux coincided with those of a dually fibered one with a dual twist. Furthermore, a $C^*$-algebraic formulation [@mathaiDualityTorusBundles2005] revealed the" +"---\nabstract: 'Thin films of topological insulators (TI) attract considerable attention because of expected topological effects from the inter-surface hybridization of Dirac points. However, these effects may be depleted by unexpectedly large energy smearing $\\Gamma$ of surface Dirac points by the random potential of abundant Coulomb impurities. We show that in a typical TI film with large dielectric constant $\\sim 50$ sandwiched between two low dielectric constant layers, the Rytova-Chaplik-Entin-Keldysh modification of the Coulomb potential of a charge impurity allows a larger number of the film impurities to contribute to $\\Gamma$. As a result, $\\Gamma$ is large and independent of the TI film thickness $d$ for $d > 5$ nm. In thinner films $\\Gamma$ grows with decreasing $d$ due to reduction of screening by the hybridization gap. We study the surface conductivity away from the neutrality point and at the neutrality point. 
In the latter case, we find the maximum TI film thickness at which the hybridization gap is still able to make a TI film insulating and allow observation of the quantum spin Hall effect, $d_{\\max} \\sim 7$ nm.'\nauthor:\n- 'Yi Huang\u00a0(\u9ec4\u5955)'\n- 'B.I. Shklovskii'\ntitle: Disorder effects in topological insulator thin films\n---\n\n[UTF8]{}[gbsn]{}\n\nIntroduction\n============" +"---\nabstract: |\n We extend Ghys\u2019 theory about semiconjugacy to the world of measurable cocycles. More precisely, given a measurable cocycle with values into $\\textup{Homeo}^+(\\mathbb{S}^1)$, we can construct a $\\textup{L}^\\infty$-parametrized Euler class in bounded cohomology. We show that such a class vanishes if and only if the cocycle can be lifted to $\\textup{Homeo}^+_{\\mathbb{Z}}(\\mathbb{R})$ and it admits an equivariant family of points.\n\n We define the notion of semicohomologous cocycles and we show that two measurable cocycles are semicohomologous if and only if they induce the same parametrized Euler class. Since for minimal cocycles, semicohomology boils down to cohomology, the parametrized Euler class is constant for minimal cohomologous cocycles.\n\n We conclude by studying the vanishing of the real parametrized Euler class and we obtain some results of elementarity.\naddress: 'Section de Math\u00e9matiques, University of Geneva, Rue du Li\u00e8vre 2, 1227 Geneva, Switzerland'\nauthor:\n- 'A. Savini'\nbibliography:\n- 'biblionote.bib'\ndate: '.\u00a0\u00a9[\u00a0The author was partially supported by the FNS grant no. 200020-192216.]{}'\ntitle: Parametrized Euler class and semicohomology theory\n---\n\nIntroduction\n============\n\nOne of the most elementary and at the same time intriguing fields in dynamics is the study of *circle actions*. A circle action of a group $\\Gamma$ is a" +"---\nabstract: |\n Landry, Minsky and Taylor defined the taut polynomial of a veering triangulation. Its specialisations generalise the Teichm\u00fcller polynomial of a fibred face of the Thurston norm ball. We prove that the taut polynomial of a veering triangulation is equal to a certain twisted Alexander polynomial of the underlying manifold. Thus the Teichm\u00fcller polynomials are just specialisations of twisted Alexander polynomials. We also give formulas relating the taut polynomial and the untwisted Alexander polynomial. There are two formulas, depending on whether the maximal free abelian cover of a veering triangulation is edge-orientable or not.\n\n Furthermore, we consider 3-manifolds obtained by Dehn filling a veering triangulation. In this case we give formulas that relate the specialisation of the taut polynomial under a Dehn filling and the Alexander polynomial of the Dehn-filled manifold. This extends a theorem of McMullen connecting the Teichm\u00fcller polynomial and the Alexander polynomial to the nonfibred setting, and improves it in the fibred case. We also prove a sufficient and necessary condition for the existence of an orientable fibred class in the cone over a fibred face of the Thurston norm ball.\naddress: |\n Mathematics Institute, University of Warwick, Coventry CV4 7AL, United Kingdom\\\n Mathematical Institute," +"---\nabstract: 'The numerical solution of a linear Schr\u00f6dinger equation in the semiclassical regime is very well understood on the torus $\\BB{T}^d$. A raft of modern computational methods are precise and affordable, while conserving energy and resolving high oscillations very well. 
This, however, is far from the case with regard to its solution in $\\BB{R}^d$, a setting more suitable for many applications. In this paper we extend the theory of splitting methods to this end. The main idea is to derive the solution using a spectral method from a combination of solutions of the free Schr\u00f6dinger equation and of linear scalar ordinary differential equations, in a symmetric Zassenhaus splitting method. This necessitates detailed analysis of certain orthonormal spectral bases on the real line and their evolution under the free Schr\u00f6dinger operator.'\nauthor:\n- |\n Arieh Iserles\\\n Department of Applied Mathematics and Theoretical Physics\\\n Centre for Mathematical Sciences\\\n University of Cambridge\\\n Wilberforce Rd, Cambridge CB4 1LE\\\n United Kingdom\n- |\n Karolina Kropielnicka\\\n Institute of Mathematics\\\n Polish Academy of Sciences\\\n Antoniego Abrahama 18, 81-825 Sopot\\\n Poland\n- |\n Katharina Schratz\\\n Laboratoire Jacques-Louis Lions\\\n Sorbonne Universit\u00e9\\\n 4 place Jussieu, 75252 Paris\\\n France\n- |\n Marcus Webb\\\n Department of Mathematics\\\n University of Manchester\\\n Alan" +"---\nabstract: 'In recent years, artificial neural networks and their applications for large data sets have become a crucial part of scientific research. In this work, we implement the Multilayer Perceptron (MLP), which is a class of feedforward artificial neural network (ANN), to predict ground-state binding energies of atomic nuclei. Two different MLP architectures with three and four hidden layers are used to study their effects on the predictions. To train the MLP architectures, two different inputs are used along with the latest atomic mass table, and changes in binding energy predictions are analyzed in terms of the changes in the input channel. It is seen that using appropriate MLP architectures and putting more physical information in the input channels, MLP can make fast and reliable predictions for binding energies of atomic nuclei, which is also comparable to the microscopic energy density functionals.'\nauthor:\n- Esra Y\u00fcksel\n- Derya Soydaner\n- H\u00fcseyin Bahtiyar\nbibliography:\n- 'apssamp.bib'\ntitle: 'Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron'\n---\n\nINTRODUCTION\n============\n\nOne of the major research areas in nuclear physics is nuclear mass (binding energy) prediction, especially for nuclei far from the stability line with extreme proton-neutron" +"---\nabstract: 'Nodal lines, as one-dimensional band degeneracies in momentum space, usually feature a linear energy splitting. Here, we propose the concept of magnetic higher-order nodal lines, which are nodal lines with higher-order energy splitting and realized in magnetic systems with broken time reversal symmetry. We provide sufficient symmetry conditions for stabilizing magnetic quadratic and cubic nodal lines, based on which concrete lattice models are constructed to demonstrate their existence. Unlike its counterpart in nonmagnetic systems, the magnetic quadratic nodal line can exist as the only band degeneracy at the Fermi level. We show that these nodal lines can be accompanied by torus surface states, which form a surface band that spans over the whole surface Brillouin zone. 
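As a companion to the splitting-method abstract above: the well-understood torus baseline it starts from can be written in a few lines. Below is a minimal Strang splitting for a semiclassical Schrödinger equation on a periodic grid; the potential, parameters, and grid size are illustrative assumptions, and the paper's symmetric Zassenhaus method on $\BB{R}^d$ is considerably more refined than this sketch.

```python
import numpy as np

# Strang splitting for a semiclassical Schroedinger equation on [0, 2*pi):
#   i*eps*u_t = -(eps^2/2)*u_xx + V(x)*u     (illustrative parameters)
eps, n, dt, steps = 0.1, 256, 1e-3, 1000
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)             # integer Fourier wavenumbers
V = 1.0 + np.cos(x)                          # smooth periodic potential (assumed)
u = np.exp(-20 * (x - np.pi) ** 2).astype(complex)
u /= np.linalg.norm(u)

half_potential = np.exp(-0.5j * dt * V / eps)   # half-step for the potential part
free_step = np.exp(-0.5j * dt * eps * k ** 2)   # full step for the free part

for _ in range(steps):
    u = half_potential * u
    u = np.fft.ifft(free_step * np.fft.fft(u))
    u = half_potential * u

print(np.linalg.norm(u))   # the scheme is unitary, so this stays ~1
```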
Under symmetry breaking, these magnetic nodal lines can be transformed into a variety of interesting topological states, such as three-dimensional quantum anomalous Hall insulator, multiple linear nodal lines, and magnetic triple-Weyl semimetal. The three-dimensional quantum anomalous Hall insulator features a Hall conductivity $\\sigma_{xy}$ quantized in units of $e^2/(h d)$, where $d$ is the lattice constant normal to the $x$-$y$ plane. Our work reveals previously unknown topological states, and offers guidance to search for them in realistic material systems.'\naddress:\n- 'Key Lab" +"---\nabstract: 'The dynamics of the Broad Line Region (BLR) in active galaxies is an open question; direct observational constraints suggest a predominantly Keplerian motion, with possible traces of inflow or outflow. In this paper we study in detail the physically motivated BLR model of @Czerny2011 based on the radiation pressure acting on dust at the surface layers of the accretion disk (AD). We consider here a non-hydrodynamical approach to the dynamics of the dusty cloud under the influence of radiation coming from the entire AD. We use a realistic description of the dust opacity, and we introduce two simple geometrical models of the local shielding of the dusty cloud. We show that the radiation pressure acting on dusty clouds is strong enough to lead to dynamical outflow from the AD surface, so the BLR has a dynamical character of (mostly failed) outflow. The dynamics strongly depend on the Eddington ratio of the source. Large Eddington ratio sources show a complex velocity field and large vertical velocities with respect to the AD surface, while for lower Eddington ratio sources vertical velocities are small and most of the emission originates close to the AD surface. Cloud dynamics thus determines the 3-D" +"---\nabstract: 'We consider the Ising model on the hexagonal lattice evolving according to Metropolis dynamics. We study its metastable behavior in the limit of vanishing temperature when the system is immersed in a small external magnetic field. We determine the asymptotic properties of the transition time from the metastable to the stable state up to a multiplicative factor and study the mixing time and the spectral gap of the Markov process. We give a geometrical description of the critical configurations and show how not only their size but their shape varies depending on the thermodynamical parameters. Finally we provide some results concerning polyiamonds of maximal area and minimal perimeter.'\naddress:\n- 'Dipartimento di Matematica e Fisica, Universit\u00e0 Roma Tre'\n- 'Dipartimento di Matematica \u201cUlisse Dini\u201d, Universit\u00e0 degli studi di Firenze '\n- 'Dipartimento di Matematica \u201cUlisse Dini\u201d, Universit\u00e0 degli studi di Firenze and Faculteit Wiskunde en Informatica, Technische Universiteit Eindhoven'\n- 'Dipartimento di Matematica \u201cTullio Levi-Civita\u201d, Universit\u00e0 degli Studi di Padova '\nauthor:\n- Valentina Apollonio\n- Vanessa Jacquier\n- Francesca Romana Nardi\n- Alessio Troiani\ntitle: Metastability for the Ising model on the hexagonal lattice\n---\n\nIntroduction\n============\n\nA thermodynamical system, subject to a *noisy dynamics*, exhibits metastable" +"---\nabstract: 'Active Brownian motion with intermittent direction reversals is common in a class of bacteria like [*Myxococcus xanthus*]{} and [*Pseudomonas putida*]{}. 
We show that, for such a motion in two dimensions, the presence of the two time scales set by the rotational diffusion constant $D_R$ and the reversal rate $\\gamma$ gives rise to four distinct dynamical regimes: (I) $t\\ll \\min (\\gamma^{-1}, D_R^{-1})$, (II) $\\gamma^{-1}\\ll t\\ll D_R^{-1}$, (III) $D_R^{-1} \\ll t \\ll \\gamma^{-1}$, and (IV) $t\\gg \\max (\\gamma^{-1}, D_R^{-1})$, each showing distinct behavior. We characterize these behaviors by analytically computing the position distribution and persistence exponents. The position distribution shows a crossover from a strongly non-diffusive and anisotropic behavior at short times to a diffusive isotropic behavior via an intermediate regime (II) or (III). In regime (II), we show that the position distribution along the direction orthogonal to the initial orientation is a function of the scaled variable $z\\propto x_{\\perp}/t$ with a non-trivial scaling function, $f(z)=(2\\pi^3)^{-1/2}\\Gamma(1/4+iz)\\Gamma(1/4-iz)$. Furthermore, by computing the exact first-passage time distribution, we show that a novel persistence exponent $\\alpha=1$ emerges due to the direction reversal in this regime.'\nauthor:\n- Ion Santra\n- Urna Basu\n- Sanjib Sabhapandit\ntitle: Active Brownian Motion with Directional Reversals\n---\n\nActive particles like" +"---\nabstract: 'Forecasting India\u2019s economic growth has traditionally been an uncertain exercise. The indicators and factors affecting economic structures, and the variables required to build a model that captures the situation correctly, are a point of concern. Although the forecast should be specific to the country we are looking at, countries do have interlinkages among them. As the time series can be more volatile, and certain variables are sometimes unavailable, it is harder to predict for developing economies than for stable and developed nations. However, it is very important to have accurate forecasts for economic growth for successful policy formation. One of the hypothesized indicators is nighttime lights. Here, we aim to look for a relationship between GDP and nighttime lights. Specifically we look at the DMSP and VIIRS dataset. We look for relationships between nighttime lights and various measures of the economy.'\nauthor:\n- \n- \nbibliography:\n- 'ref.bib'\ntitle: 'Indian economy and Nighttime Lights\\'\n---\n\nRegression, Nighttime Lights, Economy, GDP, Geospatial Analysis\n\nIntroduction\n============\n\nIn ancient times, humans returned to their abodes as soon as the sun went down, wrapping up all activities they indulged in for livelihood. But with the advent of electricity, the scenario has changed. Human activities during the" +"---\nabstract: 'Carbon stars, enhanced in carbon and neutron-capture elements, provide a wealth of information about the nucleosynthesis history of the Galaxy. In this work, we present the first ever detailed abundance analysis of carbon star LAMOSTJ091608.81+230734.6 and a detailed abundance analysis of neutron-capture elements for the object LAMOSTJ151003.74+305407.3. Updates on the abundances of elements C, O, Mg, Ca, Cr, Mn and Ni for LAMOSTJ151003.74+305407.3 are also presented. Our analysis is based on high resolution spectra obtained using the Hanle Echelle Spectrograph (HESP) attached to the Himalayan Chandra Telescope (HCT), IAO, Hanle. The stellar atmospheric parameters (T$_{eff}$, logg, micro-turbulence ${\\zeta}$, metallicity \\[Fe/H\\]) are found to be (4820, 1.43, 1.62, $-$0.89) and (4500, 1.55, 1.24, $-$1.57) for these two objects, respectively. 
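A quick numerical aside on the scaling function $f(z)=(2\pi^3)^{-1/2}\Gamma(1/4+iz)\Gamma(1/4-iz)$ quoted in the active-Brownian abstract above: the sketch below (assuming SciPy is available; this is not code from the paper) evaluates $f$ and checks that it integrates to one, as expected for a probability density.

```python
import numpy as np
from scipy.special import gamma

def f(z):
    # f(z) = (2*pi^3)^(-1/2) * Gamma(1/4 + i z) * Gamma(1/4 - i z); real and positive
    return (gamma(0.25 + 1j * z) * gamma(0.25 - 1j * z)).real / np.sqrt(2.0 * np.pi**3)

z = np.linspace(-10, 10, 20001)       # |Gamma(1/4 + i z)|^2 decays like exp(-pi |z|)
print(np.trapz(f(z), z))              # ~= 1.0, as expected for a normalized density
```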
The abundance estimates of several elements, C, N, O, Na, $\\alpha$-elements, Fe-peak elements and neutron-capture elements Rb, Sr, Y, Zr, Ba, La, Ce, Pr, Nd, Sm and Eu are presented. Our analysis shows the star LAMOSTJ151003.74+305407.3 to be a CEMP-r/s star, and LAMOSTJ091608.81+230734.6 a CH giant. We have examined whether the i-process model yields (\\[X/Fe\\]) of heavy elements could explain the observed abundances of the CEMP-r/s star through a parametric-model-based analysis. The negative values obtained for the neutron density" +"---\nabstract: 'In [*Schubert Puzzles and Integrability I*]{} we proved several \u201cpuzzle rules\u201d for computing products of Schubert classes in $K$-theory (and sometimes equivariant $K$-theory) of $d$-step flag varieties. The principal tool was \u201cquantum integrability\u201d, in several variants of the Yang\u2013Baxter equation; this let us recognize the Schubert structure constants as $q\\to0$ limits of certain matrix entries in products of $R$- (and other) matrices of ${{{\\mathcal U}_q}(\\mathfrak{g}[z^\\pm])}$-representations. In the present work we give direct cohomological interpretations of those same matrix entries but at finite $q$: they compute products of \u201cmotivic Segre classes\u201d, closely related to $K$-theoretic Maulik\u2013Okounkov stable classes living on the [*cotangent bundles*]{} of the flag varieties. Without $q\\to0$, we avoid some divergences that blocked fuller understanding of $d=3,4$. The puzzle computations are then explained (in cohomology only in this work, not $K$-theory) in terms of Lagrangian convolutions between Nakajima quiver varieties. More specifically, the conormal bundle to the diagonal inclusion of a flag variety factors through a quiver variety that is not a cotangent bundle, and it is on [*that*]{} intermediate quiver variety that the $R$-matrix calculation occurs.'\naddress:\n- 'Allen Knutson, Cornell University, Ithaca, New York'\n- 'Paul Zinn-Justin, School of Mathematics and Statistics, The University of" +"---\nabstract: 'Imputation is a popular technique for handling missing data. We consider a nonparametric approach to imputation using the kernel ridge regression technique and [propose consistent variance estimation]{}. The proposed variance estimator is based on a linearization approach which employs the entropy method to estimate the density ratio. The $\\sqrt{n}$-consistency of the imputation estimator is established when a Sobolev space is utilized in the kernel ridge regression imputation, which enables us to develop the proposed variance estimator. Synthetic data experiments are presented to confirm our theory.'\nauthor:\n- Hengfang Wang\n- Jae Kwang Kim\nbibliography:\n- 'ref.bib'\ntitle: Statistical Inference after Kernel Ridge Regression Imputation under item nonresponse\n---\n\n[**Key words:** Reproducing kernel Hilbert space; Missing data; Nonparametric method]{}\n\nIntroduction\n============\n\nMissing data is a universal problem in statistics. Ignoring the cases with missing values can lead to misleading results [@kim2013statistical; @little2019statistical]. To avoid the potential problem with missing data, imputation is commonly used. After imputation, the imputed dataset can serve as a complete dataset that has no missing values, which in turn makes results from different analysis methods consistent. 
However, treating imputed data as if observed and applying the standard estimation procedure may result in misleading inference," +"---\nabstract: 'To break the degeneracy among galactic stellar components, we extract kinematic structures using the framework described in @Du2019 [@Du2020]. For example, the concept of stellar halos is generalized to weakly-rotating structures that are composed of loosely bound stars, which can hence be associated to both disk and elliptical type morphologies. By applying this method to central galaxies with stellar mass $10^{10-11.5}\\ M_\\odot$ from the TNG50 simulation, we identify three broadly-defined types of galaxies: ones dominated by disk, by bulge, or by stellar halo structures. We then use the simulation to infer the underlying connection between the growth of structures and physical processes over cosmic time. Tracing galaxies back in time, we recognize three fundamental regimes: an early phase of evolution ($z\\gtrsim2$), and internal and external (mainly mergers) processes that act at later times. We find that disk- and bulge-dominated galaxies are not significantly affected by mergers since $z\\sim2$; the difference in their present-day structures originates from two distinct evolutionary pathways, extended vs. compact, that are likely determined by their parent dark matter halos; i.e., nature. On the other hand, slow rotator elliptical galaxies are typically halo-dominated, forming by external processes (e.g. mergers) in the later phase, i.e., nurture." +"---\nabstract: 'The goal of this paper is to propose two nonlinear variational models for obtaining a refined motion estimation from an image sequence. Both the proposed models can be considered as a part of a generalized framework for an accurate estimation of physics-based flow fields such as rotational and fluid flow. The first model is novel in the sense that it is divided into two phases: the first phase obtains a crude estimate of the optical flow and then the second phase refines this estimate using additional constraints. The correctness of this model is proved using an evolutionary PDE approach. The second model achieves the same refinement as the first model, but in a standard manner, using a single functional. A special feature of our models is that they permit us to provide efficient numerical implementations through the first-order primal-dual Chambolle-Pock scheme. Both models are compared in the context of accurate angle estimation, by performing an anisotropic regularization of the divergence and curl of the flow, respectively. We observe that, although both models obtain the same level of accuracy, the two-phase model is more efficient. In fact, we empirically demonstrate that the single-phase and the two-phase" +"---\nabstract: 'Recent exploration of the commensurate structure in turbostratic double layer graphene shows that large-angle twisting can be treated as a decrease of the effective velocity within the energy spectrum of single-layer graphene. Within our work, we use this result as a starting point, aiming towards understanding the physics of large-angle-twisted double layer graphene (i.e. Moire) quantum dot systems. We show that, within this simple approach using the language of first quantization, yet another so far unnoticed (to our knowledge) illustrative property of the commutation relation appears in graphene physics.
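To make the kernel ridge regression imputation discussed in the abstract above (two records back) concrete, here is a minimal, hypothetical sketch using scikit-learn's KernelRidge: fit on respondents, impute nonrespondents, then average. The data-generating model and tuning parameters are invented for illustration and are not the authors' setup.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-2, 2, size=(n, 1))
y = np.sin(2 * x[:, 0]) + 0.1 * rng.standard_normal(n)   # toy outcome model
observed = rng.random(n) < 0.7                            # toy response indicator

# Fit kernel ridge regression on respondents only, then impute nonrespondents.
krr = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0)
krr.fit(x[observed], y[observed])
y_imp = y.copy()
y_imp[~observed] = krr.predict(x[~observed])

# Naive respondent mean vs the KRR imputation estimator of the mean.
print(y[observed].mean(), y_imp.mean())
```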
Intriguingly, large twisting angles prove to be a suitable tuning knob of the position symmetry in graphene systems. A complete overview of large-angle twisting in the considered dot systems is provided.'\nauthor:\n- Jozef Bucko\n- Franti\u0161ek Herman\nbibliography:\n- 'References.bib'\ntitle: Large twisting angles in Bilayer graphene Moire quantum dot structures\n---\n\nIntroduction\n============\n\nGraphene, an elegant honeycomb-structured atomic monolayer material, has already been shown to have interesting mechanical [@Papageorgiou_2017] as well as electronic [@Neto_2009] properties of the ideal semi-metal. Due to its linear dispersion, related mathematical description as well as resulting physical properties," +"---\nabstract: 'Few-shot learning is a problem of high interest in the evolution of deep learning. In this work, we consider the problem of few-shot object detection (FSOD) in a real-world, class-imbalanced scenario. For our experiments, we utilize the India Driving Dataset (IDD), as it includes a class of less-occurring road objects in the image dataset and hence provides a setup suitable for few-shot learning. We evaluate both metric-learning and meta-learning based FSOD methods, in two experimental settings: (i) representative (same-domain) splits from IDD, which evaluate the ability of a model to learn in the context of road images, and (ii) object classes with less-occurring object samples, similar to the open-set setting in the real world. From our experiments, we demonstrate that the metric-learning method outperforms meta-learning on the novel classes by (i) 11.2 $mAP$ points on the same domain, and (ii) 1.0 $mAP$ point on the open-set. We also show that our extension of object classes in a real-world open dataset offers a rich ground for few-shot learning studies.'\nauthor:\n- |\n **Anay Majee[^1], Kshitij Agrawal, Anbumani Subramanian**\\\n Intel Corporation\\\n {anay.majee, kshitij.agrawal, anbumani.subramanian}@intel.com\nbibliography:\n- 'references.bib'\ntitle: 'Few-Shot Learning for Road Object Detection'\n---\n\nIntroduction\n============\n\nThe human visual system can" +"---\nabstract: 'In nearly compensated graphene, disorder-assisted electron-phonon scattering or \u201csupercollisions\u201d is responsible for both quasiparticle recombination and energy relaxation. Within the hydrodynamic approach, these processes contribute weak decay terms to the continuity equations at local equilibrium, i.e., at the level of \u201cideal\u201d hydrodynamics. Here we report the derivation of the decay term due to weak violation of energy conservation. Such terms have to be considered on equal footing with the well-known recombination terms due to nonconservation of the number of particles in each band. At high enough temperatures in the \u201chydrodynamic regime\u201d supercollisions dominate both types of the decay terms (as compared to the leading-order electron-phonon interaction). We also discuss the contribution of supercollisions to the heat transfer equation (generalizing the continuity equation for the energy density in viscous hydrodynamics).'\nauthor:\n- 'B.N. Narozhny'\n- 'I.V. Gornyi'\nbibliography:\n- 'viscosity\\_refs.bib'\ntitle: 'Hydrodynamic approach to electronic transport in graphene: energy relaxation'\n---\n\nElectronic hydrodynamics is quickly growing into a mature field of condensed matter physics [@pg; @rev; @luc].
Similarly to the usual hydrodynamics [@dau6; @chai], this approach offers a universal, long-wavelength description of collective flows in interacting many-electron systems. As a macroscopic theory of strongly interacting systems, hydrodynamics should appear" +"---\nabstract: 'Combinatorial Game Theory has also been called \u2018additive game theory\u2019 whenever the analysis involves sums of independent game components. Such [*disjunctive sums*]{} invoke comparison between games, which allows abstract values to be assigned to them. However, there are rulesets with [*entailing moves*]{} that break the alternating play axiom and/or restrict the other player\u2019s options within the disjunctive sum components. These situations are exemplified in the literature by a ruleset such as [nimstring]{}, a normal play variation of the classical children\u2019s game [dots&boxes]{}, and [top\u00a0entails]{}, an elegant ruleset introduced in the classical work Winning Ways, by Berlekamp, Conway and Guy. Such rulesets fall outside the scope of the established normal play theory. Here, we axiomatize normal play via two new terminating games, ${\\ensuremath{{{\\boldsymbol}\\infty }}}$ (Left wins) and ${\\ensuremath{\\overline{\\infty }}}$ (Right wins), and a more general theory is achieved. We define [*affine impartial*]{}, which extends classical impartial games, and we analyze their algebra by extending the established Sprague-Grundy theory, with an accompanying minimum excluded rule. Solutions of [nimstring]{} and [top\u00a0entails]{} are given to illustrate the theory.'\n---\n\n\\\n[School of Computing, National University of Singapore, Singapore]{}\\\n[**Richard J.\u00a0Nowakowski[^1]**]{}\\\n[Department of Mathematics and Statistics, Dalhousie University, Canada]{}\\\n[**Carlos" +"---\nabstract: 'We consider a randomised version of Kleene\u2019s realisability interpretation of intuitionistic arithmetic in which computability is replaced with randomised computability with positive probability. In particular, we show that (i) the set of randomly realisable statements is closed under intuitionistic first-order logic, but (ii) is different from the set of realisable statements, that (iii) \u201crealisability with probability 1\u201d is the same as realisability and (iv) that the axioms of bounded Heyting\u2019s arithmetic are randomly realisable, but some instances of the full induction scheme fail to be randomly realisable.'\nauthor:\n- Merlin Carl\n- Lorenzo Galeotti\n- Robert Passmann\nbibliography:\n- 'REFERENCES.bib'\ntitle: 'Randomising Realisability[^1]'\n---\n\nIntroduction\n============\n\nHave you met skeptical Steve? Being even more skeptical than most mathematicians, he only believes what he actually sees. To convince him that there is an $x$ such that $A$, you have to give him an example, together with evidence that $A$ holds for that example. To convince him that $A \\rightarrow B$, you have to show him a *method* for turning evidence of $A$ into evidence of $B$, and so on. Given that Steve is [\u201ca man provided with paper, pencil, and rubber, and subject to strict discipline\u201d]{} [@IntelligentM1], we can read" +"---\nabstract: 'In previous work, we established theoretical results concerning the effect of matter shells surrounding a gravitational wave (GW) source, and we now apply these results to astrophysical scenarios. Firstly, it is shown that GW echoes that are claimed to be present in LIGO data of certain events could not have been caused by a matter shell.
However, it is also shown that there are scenarios in which matter shells could make modifications of order a few percent to a GW signal; these scenarios include binary black hole mergers, binary neutron star mergers, and core collapse supernovae.'\nauthor:\n- Monos Naidoo\n- 'Nigel\u00a0T. Bishop$^*$'\n- 'Petrus\u00a0J. van der Walt'\nbibliography:\n- 'aeireferences.bib'\n- 'Ref.bib'\ndate: 'Received: date / Accepted: date'\ntitle: Modifications to the signal from a gravitational wave event due to a surrounding shell of matter \n---\n\n[paper-grg3.eps]{}\n\nIntroduction {#intro}\n============\n\nIn previous work\u00a0[@Bishop:2019ckc], we developed a model for the effect of a matter shell around a gravitational wave (GW) source, obtaining an analytic expression for the modifications to the GWs." +"---\nabstract: 'This article studies the joint problem of uplink-downlink scheduling and power allocation for controlling a large number of actuators that upload their states to remote controllers and download control actions over wireless links. To overcome the lack of wireless resources, we propose a machine learning-based solution, where only a fraction of actuators is controlled, while the rest of the actuators are actuated by locally predicting the missing state and/or action information using the previous uplink and/or downlink receptions via a Gaussian process regression (GPR). This GPR prediction credibility is determined using the age-of-information (AoI) of the latest reception. Moreover, the successful reception is affected by the transmission power, mandating a co-design of the communication and control operations. To this end, we formulate a network-wide minimization problem of the average AoI and transmission power under communication reliability and control stability constraints. To solve the problem, we propose a dynamic control algorithm using the Lyapunov drift-plus-penalty optimization framework. Numerical results corroborate that the proposed algorithm can stably control $2$x more actuators than an event-triggered scheduling baseline with Kalman filtering and frequency division multiple access, which is $18$x larger than a round-robin scheduling baseline.'\nauthor:\n- 'Abanoub M." +"---\nabstract: 'MUSE-based emission-line maps of the spiral galaxy NGC 4030 reveal the existence of unresolved sources with forbidden line emission enhanced with respect to that seen in its own [Hii]{} regions. This study reports our efforts to detect and isolate these objects and identify their nature. Candidates are first detected as unresolved sources on an image of the second principal component of the [H$\\beta$]{}, [Oiii]{}5007, [H$\\alpha$]{}, [Nii]{}6584, [Sii]{}6716, 6731 emission-line data cube, where they stand out clearly against both the dominant [Hii]{} region population and the widespread diffuse emission. The intrinsic emission is then extracted accounting for the highly inhomogeneous emission-line \u201cbackground\u201d throughout the field of view. Collisional to recombination line ratios of the forbidden lines relative to [H$\\alpha$]{} tend to increase when the background emission is corrected for. We find that many (but not all) sources detected with the principal component analysis have properties compatible with supernova remnants (SNRs).
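The PCA-based detection step in the MUSE abstract above can be illustrated schematically: treat each spaxel as a sample with one feature per emission-line map, run a PCA, and reshape the second component into an image on which unresolved candidates stand out. A toy sketch with synthetic data follows (scikit-learn; this is not the authors' pipeline, and the injected source is invented):

```python
import numpy as np
from sklearn.decomposition import PCA

ny, nx, n_lines = 64, 64, 6                      # toy cube of 6 emission-line maps
rng = np.random.default_rng(1)
cube = rng.lognormal(sigma=0.3, size=(n_lines, ny, nx))
cube[2:, 12, 40] *= 10                           # spaxel with enhanced forbidden lines (toy source)

# Treat each spaxel as a sample with one feature per emission line.
X = cube.reshape(n_lines, -1).T                  # shape (ny*nx, n_lines)
pc2 = PCA(n_components=2).fit_transform(X)[:, 1].reshape(ny, nx)

# Unresolved candidates: spaxels that stand out on the second-component image.
score = np.abs(pc2 - pc2.mean())
print(np.argwhere(score > 5 * pc2.std()))        # likely flags the spaxel at (12, 40)
```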
Applying a classification criterion combining two of these line ratios with [H$\\alpha$]{} leads to a list of 59 sources with SNR-like emission lines. Many of them exhibit conspicuous spectral signatures of SNRs around 7300 \u00c5, and a stacking analysis shows that these features are also present, albeit weaker, in other cases. At nearly 30 Mpc, these are the" +"---\nabstract: 'In this paper the Feynman Green function for Maxwell\u2019s theory in curved space-time is studied by using the Fock-Schwinger-DeWitt asymptotic expansion; the point-splitting method is then applied, since it is a valuable tool for regularizing divergent observables. Among these, the stress-energy tensor is expressed in terms of second covariant derivatives of the Hadamard Green function, which is also closely linked to the effective action; therefore one obtains a series expansion for the stress-energy tensor. Its divergent part can be isolated, and a concise formula is here obtained: by dimensional analysis and combinatorics, there are two kinds of terms: quadratic in curvature tensors (Riemann, Ricci tensors and scalar curvature) and linear in their second covariant derivatives. This formula holds for every space-time metric; it is made even more explicit in the physically relevant particular cases of Ricci-flat and maximally symmetric spaces, and fully evaluated for some examples of physical interest: Kerr and Schwarzschild metrics and de Sitter space-time.'\nauthor:\n- |\n Roberto Niardi ORCID: 0000-0001-9216-3322,\\\n Dipartimento di Fisica \u201cEttore Pancini\u201d,\\\n Universit\u00e0 degli Studi di Napoli Federico II, Italy\n- |\n Giampiero Esposito ORCID: 0000-0001-5930-8366\\\n Dipartimento di Fisica \u201cEttore Pancini\u201d,\\\n Universit\u00e0 degli Studi di Napoli Federico II, Italy\\\n Complesso Universitario" +"---\nabstract:\n- 'Navigation problems under unknown varying conditions are among the most important and well-studied problems in the control field. Classic model-based adaptive control methods can be applied only when a convenient model of the plant or environment is provided. Recent model-free adaptive control methods aim at removing this dependency by learning the physical characteristics of the plant and/or process directly from sensor feedback. Although there have been prior attempts at improving these techniques, it remains an open question as to whether it is possible to cope with real-world uncertainties in a control system that is fully based on either paradigm. We propose a conceptually simple learning-based approach composed of a full state feedback controller, tuned robustly by a deep reinforcement learning framework based on the Soft Actor-Critic algorithm. We compare it, in realistic simulations, to a model-free controller that uses the same deep reinforcement learning framework for the control of a micro aerial vehicle under wind gusts. The results indicate the great potential of learning-based adaptive control methods in modern dynamical systems.'\n- 'The authors thank Dr. Estelle Chauveau from Naval Group for the help provided. This work was supported by SENI, the research laboratory between Naval Group" +"---\nabstract: 'Deep convolutional neural networks, assisted by architectural design strategies, make extensive use of data augmentation techniques and layers with a high number of feature maps to embed object transformations. That is highly inefficient and for large datasets implies a massive redundancy of feature detectors.
Even though capsule networks are still in their infancy, they constitute a promising solution to extend current convolutional networks and endow artificial visual perception with a process that encodes all feature affine transformations more efficiently. Indeed, a properly working capsule network should theoretically achieve better results with a considerably lower parameter count, due to its intrinsic capability to generalize to novel viewpoints. Nevertheless, little attention has been given to this relevant aspect. In this paper, we investigate the efficiency of capsule networks and, pushing their capacity to the limits with an extreme architecture with barely 160K parameters, we prove that the proposed architecture is still able to achieve state-of-the-art results on three different datasets with only 2% of the original CapsNet parameters. Moreover, we replace dynamic routing with a novel non-iterative, highly parallelizable routing algorithm that can easily cope with a reduced number of capsules. Extensive experimentation with other capsule implementations has proved" +"---\nabstract: 'Compressive sensing (CS) is a signal processing technique that enables sub-Nyquist sampling and near lossless reconstruction of a sparse signal. The technique is particularly appealing for neural signal processing since it avoids the issues related to high sampling rates and large data storage. In this project, different CS reconstruction algorithms were tested on raw action potential signals recorded in our lab. Two numerical criteria were set to evaluate the performance of different CS algorithms: Compression Ratio (CR) and Signal-to-Noise Ratio (SNR). In order to do this, individual CS algorithm testing platforms for the EEG data were constructed within a MATLAB scheme. The main considerations for the project were the following: 1) feasibility of the dictionary; 2) tolerance to non-sparsity; 3) applicability of thresholding or interpolation.'\nauthor:\n- \ntitle: Study on Compressed Sensing of Action Potential\n---\n\nCompressive Sensing (CS), Nyquist-Shannon Sampling, Electroencephalography (EEG), Sparsity\n\nIntroduction\n============\n\nElectrophysiological signals represent brain activity in the form of electrical signals. An action potential is the activity of a single neuron, consisting of a rapid polarization and depolarization process. Action potentials are key to understanding neuronal activity and brain-machine interface applications. Action potentials are typically recorded at tens of kilohertz \\[1\\]. Unfortunately, recording action potentials" +"---\nabstract: 'This work investigates continuous time stochastic differential games with a large number of players whose costs and dynamics interact through the empirical distribution of both their states and their controls. The control processes are assumed to be open-loop. We give regularity conditions guaranteeing that if the finite-player game admits a Nash equilibrium, then both the sequence of equilibria and the corresponding state processes satisfy a Sanov-type large deviation principle. The results require existence of a Lipschitz continuous solution of the master equation of the corresponding mean field game, and they carry over to cooperative (i.e. central planner) games. 
We study a linear-quadratic case of such games in detail.'\nauthor:\n- Peng Luo\n- Ludovic Tangpi\nbibliography:\n- 'references-Concen\\_RM.bib'\ntitle: Laplace principle for large population games with control interaction\n---\n\nIntroduction\n============\n\nThis paper is a sequel to [@pontryagin] in which the convergence of symmetric, continuous time stochastic differential games to mean field games was analyzed. Here the goal is to complement the convergence results by deriving large deviation principles (in Laplace form) for the sequence of Nash equilibria and the associated state processes. Let us briefly describe the stochastic differential game we consider, in its *non-cooperative* version. The" +"---\nabstract: 'We present a new scientific machine learning method that learns from data a computationally inexpensive surrogate model for predicting the evolution of a system governed by a time-dependent nonlinear partial differential equation (PDE), an enabling technology for many computational algorithms used in engineering settings. Our formulation generalizes to the function space PDE setting the Operator Inference method previously developed in \\[B. Peherstorfer and K. Willcox, *Data-driven operator inference for non-intrusive projection-based model reduction*, Computer Methods in Applied Mechanics and Engineering, 306 (2016)\\] for systems governed by ordinary differential equations. The method brings together two main elements. First, ideas from projection-based model reduction are used to explicitly parametrize the learned model by low-dimensional polynomial operators which reflect the known form of the governing PDE. Second, supervised machine learning tools are used to infer from data the reduced operators of this physics-informed parametrization. For systems whose governing PDEs contain more general (non-polynomial) nonlinearities, the learned model performance can be improved through the use of *lifting* variable transformations, which expose polynomial structure in the PDE. The proposed method is demonstrated on two examples: a heat equation model problem that demonstrates the benefits of the function space formulation in terms of consistency" +"---\nabstract: 'Sound event detection is a core module for acoustic environmental analysis. Semi-supervised learning techniques allow the dataset to be scaled up substantially without increasing the annotation budget, and have recently attracted considerable research attention. In this work, we study two advanced semi-supervised learning techniques for sound event detection. Data augmentation is important for the success of recent deep learning systems. This work studies the audio-signal random augmentation method, which provides an augmentation strategy that can handle a large number of different audio transformations. In addition, consistency regularization is widely adopted in recent state-of-the-art semi-supervised learning methods, which exploit unlabelled data by constraining the predictions for different transformations of a sample to be identical to the prediction for the sample itself.
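Schematically, the consistency constraint described above amounts to a loss between predictions on an unlabelled sample and on an augmented view of it, often paired with a MeanTeacher-style EMA copy of the model. A minimal PyTorch sketch, in which the model, shapes, and augmentation are all hypothetical stand-ins:

```python
import copy
import torch
import torch.nn.functional as F

student = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10))
teacher = copy.deepcopy(student)                 # MeanTeacher: EMA copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)

def consistency_loss(x_unlabelled, augment):
    # Constrain the prediction on an augmented view to match the teacher's prediction.
    with torch.no_grad():
        target = torch.sigmoid(teacher(x_unlabelled))
    pred = torch.sigmoid(student(augment(x_unlabelled)))
    return F.mse_loss(pred, target)

@torch.no_grad()
def ema_update(alpha=0.999):
    # Teacher weights follow an exponential moving average of the student's.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(alpha).add_(ps, alpha=1.0 - alpha)

x = torch.randn(8, 64)                           # a batch of unlabelled clip features (toy)
loss = consistency_loss(x, lambda t: t + 0.1 * torch.randn_like(t))
loss.backward()
ema_update()
print(float(loss))
```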
This work finds that, for semi-supervised sound event detection, consistency regularization is an effective strategy; in particular, the best performance is achieved when it is combined with the MeanTeacher model.'\naddress: |\n School of Engineering, Westlake University, Hangzhou, China\\\n Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou, China\nbibliography:\n- 'lixf\\_bib.bib'\ntitle: 'Semi-supervised Sound Event Detection using Random Augmentation and Consistency Regularization'\n---\n\nSemi-supervised learning, sound event detection, random augmentation, consistency regularization\n\nIntroduction {#sec:intro}\n============\n\nSound event" +"---\nabstract: 'Despite impressive vision-language (VL) pretraining with BERT-based encoders for VL understanding, the pretraining of a universal encoder-decoder for both VL understanding and generation remains challenging. The difficulty originates from the inherently different peculiarities of the two disciplines, e.g., VL understanding tasks capitalize on the unrestricted message passing across modalities, while generation tasks only employ visual-to-textual message passing. In this paper, we start with a two-stream decoupled design of encoder-decoder structure, in which a decoupled cross-modal encoder and decoder are involved to separately perform each type of proxy task, for simultaneous VL understanding and generation pretraining. Moreover, for VL pretraining, the dominant way is to replace some input visual/word tokens with mask tokens and enforce the multi-modal encoder/decoder to reconstruct the original tokens, but no mask token is involved when fine-tuning on downstream tasks. As an alternative, we propose a primary scheduled sampling strategy that elegantly mitigates this discrepancy by pretraining the encoder-decoder in a two-pass manner. Extensive experiments demonstrate the compelling generalizability of our pretrained encoder-decoder by fine-tuning on four VL understanding and generation downstream tasks. Source code is available at .'\nauthor:\n- 'Yehao Li ^1^, Yingwei Pan ^1^, Ting Yao ^1^, Jingwen Chen ^2^, Tao Mei" +"---\nabstract: 'In this work, we propose an approach for extrinsic sensor calibration from per-sensor ego-motion estimates. Our problem formulation is based on dual quaternions, enabling two different online capable solving approaches. We provide a certifiable globally optimal and a fast local approach along with a method to verify the globality of the local approach. Additionally, means for integrating previous knowledge, for example, a common ground plane for planar sensor motion, are described. Our algorithms are evaluated on simulated data and on a publicly available dataset containing RGB-D camera images. Further, our online calibration approach is tested on the KITTI odometry dataset, which provides data of a lidar and two stereo camera systems mounted on a vehicle.
Our evaluation confirms the short run time, state-of-the-art accuracy, and online capability of our approach, while retaining the global optimality of the solution at any time.'\nauthor:\n- 'Markus Horn$^{*}$, Thomas Wodtko$^{*}$, Michael Buchholz and Klaus Dietmayer[^1][^2] [^3][^4]'\nbibliography:\n- 'mybibfile.bib'\ntitle: 'Online Extrinsic Calibration based on Per-Sensor Ego-Motion Using Dual Quaternions'\n---\n\nCalibration and Identification, Sensor Networks, Optimization and Optimal Control\n\nIntroduction\n============\n\nIn an evolving automation process, a growing number of sensors are embedded in robotic" +"---\nabstract: 'Despite the importance of Type Ia supernovae (SNe Ia) throughout astronomy, the precise progenitor systems and explosion mechanisms that drive SNe Ia are still unknown. An explosion scenario that has gained traction recently is the double detonation in which an accreted shell of He detonates and triggers a secondary detonation in the underlying white dwarf. Our research presents a number of high resolution, multi-dimensional, full star simulations of thin-He-shell, sub-Chandrasekhar-mass white dwarf progenitors that undergo a double detonation. This suite of thin-shell progenitors incorporates He shells that are thinner than those in previous multi-dimensional studies. We confirm the viability of the double detonation across a range of He shell parameter space as well as present bulk yields and ejecta profiles for each progenitor. The yields obtained are generally consistent with previous works and indicate the likelihood of producing observables that resemble SNe Ia. The dimensionality of our simulations allows us to examine features of the double detonation more closely, including the details of the off-center secondary ignition and asymmetric ejecta. We find considerable differences in the high-velocity extent of post-detonation products across different lines of sight. The data from this work will be used to generate predicted observables" +"---\nabstract: 'In this paper, we assess the capabilities of the Arbitrary Lagrangian-Eulerian method implemented in the open-source code TrioCFD to tackle two fluid-structure interaction problems involving moving boundaries. To test the code, we first consider the bi-dimensional case of two coaxial cylinders moving in a viscous fluid. We show that the two fluid forces acting on the cylinders are in phase opposition, with amplitude and phase that only depend on the Stokes number, the dimensionless separation distance and the Keulegan-Carpenter number. Through a detailed parametric study, we show that the self (resp. cross) added mass and damping coefficients decrease (resp. increase) with the Stokes number and the separation distance. Our numerical results are in perfect agreement with the theoretical predictions of the literature, thereby validating the robustness of the ALE method implemented in TrioCFD. Then, we challenge the code by considering the case of a vibrating cylinder located in the central position of a square tube bundle. In parallel to the numerical investigations, we also present a new experimental setup for the measurement of the added coefficient, using the direct method introduced by Tanaka. The numerical predictions for the self-added coefficients are shown to be in very good" +"---\nabstract: 'As a platform, Twitter has been a significant public space for discussion related to the COVID-19 pandemic. 
Public social media platforms such as Twitter represent important sites of engagement regarding the pandemic, and these data can be used by research teams for social, health, and other research. Understanding public opinion about COVID-19 and how information diffuses in social media is important for governments and research institutions. Twitter is a ubiquitous public platform and, as such, has tremendous utility for understanding public perceptions, behavior, and attitudes related to COVID-19. In this research, we present CML-COVID, a COVID-19 Twitter data set of 19,298,967 tweets from 5,977,653 unique individuals and summarize some of the attributes of these data. These tweets were collected between March 2020 and July 2020 using the query terms \u2018coronavirus\u2019, \u2018covid\u2019 and \u2018mask\u2019 related to COVID-19. We use topic modeling, sentiment analysis, and descriptive statistics to describe the tweets related to COVID-19 we collected and the geographical location of tweets, where available. We provide information on how to access our tweet dataset (archived using twarc) at .'\nauthor:\n- |\n [![image](orcid.pdf)Hassan Dashtian](https://orcid.org/0000-0001-6400-1190)\\\n Computational Media Lab,\\\n School of Journalism and Media,\\\n Moody College of Communication,\\\n The University of" +"---\nabstract: 'The nitrogen-vacancy (NV) centre in diamond has emerged as a candidate to non-invasively hyperpolarise nuclear spins in molecular systems to improve the sensitivity of nuclear magnetic resonance (NMR) experiments. Several promising proof-of-principle experiments have demonstrated small-scale polarisation transfer from single NVs to hydrogen spins outside the diamond. However, the scaling up of these results to the use of a dense NV ensemble, which is a necessary prerequisite for achieving realistic NMR sensitivity enhancement, has not yet been demonstrated. In this work, we present evidence for a polarising interaction between a shallow NV ensemble and external nuclear targets over a micrometre scale, and characterise the challenges in achieving useful polarisation enhancement. In the most favourable example of the interaction with hydrogen in a solid state target, a maximum polarisation transfer rate of $\\approx 7500$ spins per second per NV is measured, averaged over an area containing order $10^6$ NVs. Reduced levels of polarisation efficiency are found for liquid state targets, where molecular diffusion limits the transfer. Through analysis via a theoretical model, we find that our results suggest implementation of this technique for NMR sensitivity enhancement is feasible following realistic diamond material improvements.'\nauthor:\n- 'A. J.
+"---\nauthor:\n- |\n Song-Ju Kim${}^{\\dag}$${}^{\\ddag}$${}^{\\ast}$, Taiki Takahashi${}^{\\S}$, and Kazuo Sano${}^{\\dag}$${}^{\\P}$\\\n \\\n ${}^{\\dag}$ SOBIN Institute, Kawanishi, Japan\\\n https://sobin.org\\\n ${}^{\\ddag}$ Graduate School of Media and Governance, Keio University, Fujisawa, Japan\\\n ${}^{\\S}$ Department of Behavioral Science, Research and Education Center for Brain Sciences,\\\n Center for Experimental Research in Social Sciences, Hokkaido University, Sapporo, Japan\\\n ${}^{\\P}$ Department of Economics, Fukui Prefectural University, Fukui, Japan\\\n ${}^{\\ast}$Email: kim@sobin.org\ntitle: 'A Balance for Fairness: Fair Distribution Utilising Physics in Games of Characteristic Function Form'\n---\n\nKeyword:\\\nNatural Intelligence, Natural Computing, Fairness, Cooperative Game, Characteristic Function Form\n\nIntroduction\n============\n\nThe Buddha taught that the \u2019goodness\u2019 that embodies \u2019righteousness\u2019 stands in contrast to duties and to personal ties. In other words, the definition of \u2019good\u2019 is what will be in the interest of oneself, in the interest of others, and in the interest of those to be born in the future. Humans feel happy when they are needed within a community and when they play a role that benefits others and the whole community. This core aspect of human nature is often forgotten in actual social activities, seen as a \u2019beautiful thing\u2019 at a distance from the practical.\n\nIn modern society driven by neoliberalism, only the more primitive" +"---\nabstract: 'The growth rate of the number of scientific publications is constantly increasing, creating important challenges in the identification of valuable research and in various scholarly data management applications, in general. In this context, measures which can effectively quantify the scientific impact could be invaluable. In this work, we present BIP! DB, an open dataset that contains a variety of impact measures calculated for a large collection of more than $100$ million scientific publications from various disciplines.'\nauthor:\n- Thanasis Vergoulis\n- Ilias Kanellos\n- Claudio Atzori\n- Andrea Mannocci\n- Serafeim Chatzopoulos\n- Sandro La Bruzzo\n- Natalia Manola\n- Paolo Manghi\nbibliography:\n- 'main.bib'\ntitle: 'BIP! DB: A Dataset of Impact Measures for Scientific Publications'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe growth rate of the number of published scientific articles is constantly increasing\u00a0[@growth2]. At the same time, studies suggest that, among the vast number of published works, many are of low impact or may even contain research of questionable quality\u00a0[@ioannidis2005most]. Consequently, identifying the most valuable publications for any given research topic has become extremely tedious and time consuming.\n\nQuantifying the impact of scientific publications could facilitate this and other related tasks, which make up the daily" +"---\nabstract: 'We consider a linear symmetric and elliptic PDE and a linear goal functional. We design and analyze a goal-oriented adaptive finite element method, which steers the adaptive mesh-refinement as well as the approximate solution of the arising linear systems by means of a contractive iterative solver like the optimally preconditioned conjugate gradient method or geometric multigrid. We prove linear convergence of the proposed adaptive algorithm with optimal algebraic rates. 
Unlike prior work, we do not only consider rates with respect to the number of degrees of freedom but even prove optimal complexity, i.e., optimal convergence rates with respect to the total computational cost.'\naddress:\n- 'Universit\u00e9 de Pau et des Pays de l\u2019Adour, IPRA-LMAP, Avenue de l\u2019Universit\u00e9 BP 1155, 64013 PAU Cedex, France'\n- 'Korteweg-de Vries (KdV) Institute for Mathematics, University of Amsterdam, P.O. Box 94248, 1090 GE Amsterdam, The Netherlands.'\n- 'TU Wien, Institute for Analysis and Scientific Computing, Wiedner Hauptstr. 8-10/E101/4, 1040 Vienna, Austria'\n- 'TU Wien, Institute for Analysis and Scientific Computing, Wiedner Hauptstr. 8-10/E101/4, 1040 Vienna, Austria'\nauthor:\n- Roland Becker\n- Gregor Gantner\n- Michael Innerberger\n- Dirk Praetorius\nbibliography:\n- 'literature.bib'\ntitle: 'Goal-oriented adaptive finite element methods with optimal computational complexity'\n---" +"---\nabstract: 'Large-scale magnetic field is believed to play a key role in launching and collimating jets/outflows. It was found that advection of external field by a geometrically thin disk is rather inefficient, while the external weak field may be dragged inwards by fast radially moving tenuous and/or hot gas above the thin disk. We investigate the field advection in a thin (cold) accretion disk covered with a hot corona, in which turbulence is responsible for the angular momentum transfer of the gas in the disk and corona. The radial velocity of the gas in the corona is significantly higher than that in the thin disk. Our calculations show that the external magnetic flux is efficiently transported inwards by the corona, and the field line is strongly inclined towards the disk surface, which helps launch outflows. The field configurations are consistent with those observed in the numerical simulations. The strength of the field is substantially enhanced in the inner region of the disk (usually several orders of magnitude higher than the external field strength), which is able to drive a fraction of gas in the corona into outflows. This mechanism may be useful in explaining the observational features in X-ray binaries" +"---\nabstract: 'Nowadays, with the vigorous expansion and development of gaming video streaming techniques and services, the expectations of users, especially mobile phone users, for higher quality of experience are also growing swiftly. As most of the existing research focuses on traditional video streaming, there is a clear lack of both subjective studies and objective quality models tailored to quality assessment of mobile gaming content. To this end, in this study, we first present a brand new Tencent Gaming Video dataset containing 1293 mobile gaming sequences encoded with three different codecs. Second, we propose an objective quality framework, namely Efficient hard-RAnk Quality Estimator (ERAQUE), that is equipped with (1) a novel hard pairwise ranking loss, which forces the model to put more emphasis on differentiating similar pairs; (2) an adapted model distillation strategy, which could be utilized to compress the proposed model efficiently without causing a significant performance drop.
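For the model distillation step mentioned in the gaming-video abstract above, a standard formulation (a hedged sketch; not necessarily ERAQUE's adapted strategy) is the temperature-scaled Kullback-Leibler loss between teacher and student outputs:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soften both distributions with temperature T and match them via KL divergence."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

s = torch.randn(8, 5, requires_grad=True)   # student outputs (toy)
t = torch.randn(8, 5)                       # frozen teacher outputs (toy)
print(float(distillation_loss(s, t)))
```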
Extensive experiments demonstrate the efficiency and robustness of our model.'\naddress: '[[^1^]{} LS2N, \u00a0University of Nantes]{} \u00a0\u00a0 [[^2^]{}Turing Lab, \u00a0 Tencent ]{}'\ntitle: Subjective and Objective Quality Assessment of Mobile Gaming Video\n---\n\nSubjective quality assessment, objective quality metric, gaming video, model distillation\n\nIntroduction {#sec:intro}\n============\n\nGaming video streaming is composed" +"---\nabstract: 'Three-boson Efimov physics is well known in the bound-state regime, but far less so in the three-particle continuum at negative two-particle scattering length where Efimov states evolve into resonances. They are studied by solving rigorous three-particle scattering equations for transition operators in momentum space. The dependence of the three-boson resonance energy and width on the two-boson scattering length is studied with several force models. The universal limit is determined numerically by considering highly excited states; simple parametrizations for the resonance energy and width in terms of the scattering length are established. Decreasing the attraction, the resonances rise not much above the threshold but broaden rapidly and become physically unobservable, evolving into subthreshold resonances. Finite-range effects are studied and related to those in the bound-state regime.'\nauthor:\n- 'A.\u00a0Deltuva'\ntitle: 'Energies and widths of Efimov states in the three-boson continuum'\n---\n\nIntroduction \\[sec:intro\\]\n==========================\n\nFifty years ago V. Efimov studied theoretically the three-body system with large two-body scattering lengths [@efimov:plb] and laid the foundations of the universal physics, also called Efimov physics. Since then a large number of theoretical and experimental works with applications to nuclear, cold atom, and molecular physics have been performed, and the properties of universal few-body" +"---\nabstract: 'Given input-output pairs of an elliptic partial differential equation (PDE) in three dimensions, we derive the first theoretically-rigorous scheme for learning the associated Green\u2019s function $G$. By exploiting the hierarchical low-rank structure of $G$, we show that one can construct an approximant to $G$ that converges almost surely and achieves a relative error of $\\mathcal{O}(\\Gamma_\\epsilon^{-1/2}\\log^3(1/\\epsilon)\\epsilon)$ using at most $\\mathcal{O}(\\epsilon^{-6}\\log^4(1/\\epsilon))$ input-output training pairs with high probability, for any $0<\\epsilon<1$. The quantity $0<\\Gamma_\\epsilon\\leq 1$ characterizes the quality of the training dataset. Along the way, we extend the randomized singular value decomposition algorithm for learning matrices to Hilbert\u2013Schmidt operators and characterize the quality of covariance kernels for PDE learning.'\nauthor:\n- Nicolas Boull\u00e9\n- Alex Townsend\nbibliography:\n- 'references.bib'\ndate: 'Received: 1 February 2021 / Revised: 18 November 2021 / Accepted: 20 November 2021'\ntitle: 'Learning elliptic partial differential equations with randomized linear algebra [^1] '\n---\n\nIntroduction\n============\n\nCan one learn a differential operator from pairs of solutions and right-hand sides? If so, how many pairs are required? These two questions have received significant research attention\u00a0[@feliu2020meta; @li2020fourier; @long2018pde; @pang2019neural].
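The matrix version of the randomized SVD that the scheme above extends to Hilbert-Schmidt operators is short enough to state in full. A minimal NumPy sketch in the standard Halko-Martinsson-Tropp form (the paper's operator variant with covariance-kernel sampling is not reproduced here):

```python
import numpy as np

def randomized_svd(A, k, oversample=10):
    """Rank-k truncated SVD of A from k + oversample random sketches of its range."""
    m, n = A.shape
    omega = np.random.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ omega)               # orthonormal basis for the sampled range
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

A = np.random.standard_normal((200, 50)) @ np.random.standard_normal((50, 120))  # rank <= 50
U, s, Vt = randomized_svd(A, k=20)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # relative error of the sketch
```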
From data, one hopes to eventually learn physical laws of nature or conservation laws that elude scientists in the biological" +"---\nabstract: |\n Recent work has established clear links between the generalization performance of trained neural networks and the geometry of their loss landscape near the local minima to which they converge. This suggests that qualitative and quantitative examination of the loss landscape geometry could yield insights about neural network generalization performance during training. To this end, researchers have proposed visualizing the loss landscape through the use of simple dimensionality reduction techniques. However, such visualization methods have been limited by their linear nature and only capture features in one or two dimensions, thus restricting sampling of the loss landscape to lines or planes. Here, we expand and improve upon these in three ways. First, we present a novel \u201cjump and retrain\u201d procedure for sampling relevant portions of the loss landscape. We show that the resulting sampled data holds more meaningful information about the network\u2019s ability to generalize. Next, we show that non-linear dimensionality reduction of the jump and retrain trajectories via PHATE, a trajectory and manifold-preserving method, allows us to visualize differences between networks that are generalizing well vs. poorly. Finally, we combine PHATE trajectories with a computational homology characterization to quantify trajectory differences.\n\n [^1]\n\n [^2]\nauthor:\n- Stefan Horoi" +"---\nabstract: |\n Recent applications employ publish/subscribe (Pub/Sub) systems so that publishers can easily receive the attention of customers and subscribers can monitor useful information generated by publishers. Due to the prevalence of smart devices and social networking services, a large number of objects that contain both spatial and keyword information have been generated continuously, and the number of subscribers also continues to increase. This poses a challenge to Pub/Sub systems: they need to continuously extract useful information from massive objects for each subscriber in real time.\n\n In this paper, we address the problem of $k$ nearest neighbor monitoring on a spatial-keyword data stream for a large number of subscriptions. To scale well to massive objects and subscriptions, we propose a distributed solution, namely D$k$M-SKS. Given $m$ workers, D$k$M-SKS divides a set of subscriptions into $m$ disjoint subsets based on a cost model so that each worker has almost the same $k$NN-update cost, to maintain load balancing. D$k$M-SKS allows an arbitrary approach to updating $k$NN of each subscription, so with a suitable in-memory index, D$k$M-SKS can accelerate update efficiency by pruning irrelevant subscriptions for a given new object. We conduct experiments on real datasets, and the results demonstrate the efficiency and" +"---\nabstract: 'Quadrotors can achieve aggressive flight by tracking complex maneuvers and rapidly changing directions. Planning for aggressive flight with trajectory optimization could be incredibly fast, even in higher dimensions, and can account for the dynamics of the quadrotor; however, it only provides a locally optimal solution. On the other hand, planning with discrete graph search can handle non-convex spaces to guarantee optimality but suffers from exponential complexity with the dimension of the search. 
We introduce a framework for aggressive quadrotor trajectory generation with global reasoning capabilities that combines the best of trajectory optimization and discrete graph search. Specifically, we develop a novel algorithmic framework that *interleaves* these two methods to complement each other and generate trajectories with provable guarantees on completeness up to discretization. We demonstrate and quantitatively analyze the performance of our algorithm in challenging simulation environments with narrow gaps that create severe attitude constraints and push the dynamic capabilities of the quadrotor. Experiments show the benefits of the proposed algorithmic framework over standalone trajectory optimization and graph search-based planning techniques for aggressive quadrotor flight.'\nauthor:\n- 'Ramkumar Natarajan$^{1}$, Howie Choset$^{1}$ and Maxim Likhachev$^{1}$ [^1][^2] [^3] [^4]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'mybibfile.bib'\ntitle: |\n **Interleaving Graph Search and Trajectory Optimization\\\n for" +"---\nabstract: |\n Complex networks are pervasive in the real world, capturing dyadic interactions between pairs of vertices, and a large corpus has emerged on their mining and modeling. However, many phenomena are comprised of polyadic interactions between more than two vertices. Such complex hypergraphs range from emails among groups of individuals, scholarly collaboration, or joint interactions of proteins in living cells. Complex hypergraphs and their models form an emergent topic, requiring new models and techniques.\n\n A key generative principle within social and other complex networks is transitivity, where friends of friends are more likely friends. The previously proposed Iterated Local Transitivity (ILT) model incorporated transitivity as an evolutionary mechanism. The ILT model provably satisfies many observed properties of social networks, such as densification, low average distances, and high clustering coefficients.\n\n We propose a new, generative model for complex hypergraphs based on transitivity, called the Iterated Local Transitivity Hypergraph (or ILTH) model. In ILTH, we iteratively apply the principle of transitivity to form new hypergraphs. The resulting model generates hypergraphs simulating properties observed in real-world complex hypergraphs, such as densification and low average distances. We consider properties unique to hypergraphs not captured by their 2-section. We show that certain motifs," +"---\nabstract: 'Network (graph) data analysis is a popular research topic in statistics and machine learning. In application, one is frequently confronted with graph two-sample hypothesis testing where the goal is to test the difference between two graph populations. Several statistical tests have been devised for this purpose in the context of binary graphs. However, many of the practical networks are weighted and existing procedures can\u2019t be directly applied to weighted graphs. In this paper, we study the weighted graph two-sample hypothesis testing problem and propose a practical test statistic. We prove that the proposed test statistic converges in distribution to the standard normal distribution under the null hypothesis and analyze its power theoretically. The simulation study shows that the proposed test has satisfactory performance and it substantially outperforms the existing counterpart in the binary graph case. 
A real data application is provided to illustrate the method.'\naddress: 'Department of Statistics, North Dakota State University, Fargo, ND, USA, 58102.'\nauthor:\n- Mingao Yuan\n- Qian Wen\ntitle: 'A Practical Two-Sample Test for Weighted Random Graphs'\n---\n\ntwo-sample hypothesis test, random graph, weighted graph\n\nIntroduction {#S:1}\n============\n\nA graph or network $\mathcal{G}=(V,E)$ is a mathematical model that consists of a set $V$ of" +"---\nauthor:\n- 'Ilka Brunner,'\n- 'Fabian Klos,'\n- Daniel Roggenkamp\nbibliography:\n- 'references.bib'\ntitle: Phase transitions in GLSMs and defects\n---\n\nIntroduction\n============\n\nThe topic of this paper is two-dimensional gauged linear sigma models with $U(1)$ gauge groups[^1]. These are 2d $N=(2,2)$ supersymmetric gauge theories coupled to chiral superfields carrying possibly different charges under the $U(1)$ gauge group, such that the respective superpotentials $W$ are $U(1)$ invariant.\n\nAs is well known, gauged linear sigma models exhibit different phases for different ranges of the Fayet-Iliopoulos parameter $r$ associated to the $U(1)$ gauge group [@Witten:1993yc]. For non-anomalous gauged linear sigma models, where axial and vector $R$-symmetries are preserved at the quantum level, the RG flow drives the GLSM to a (K\u00e4hler) moduli space of superconformal field theories parametrized by the complexified Fayet-Iliopoulos parameter $t$. The phases correspond to different domains of this moduli space. In contrast, in the anomalous case, the FI parameter is a running coupling constant, and the different phases correspond to fixed points under the RG flow.\n\nThe phases typically exhibit gauge symmetry breaking. For instance, in geometric phases, in which the theory can be effectively described by a non-linear sigma model, the gauge group is typically completely" +"---\nabstract: 'Generative Adversarial Networks (GANs) are powerful generative models that have achieved strong results, mainly in the image domain. However, the training of GANs is not trivial, presenting some challenges tackled by different strategies. Evolutionary algorithms, such as COEGAN, were recently proposed as a solution to improve GAN training, overcoming common problems that affect the model, such as vanishing gradient and mode collapse. In this work, we propose an evaluation method based on t-distributed Stochastic Neighbour Embedding (t-SNE) to assess the progress of GANs and visualize the distribution learned by generators in training. We propose the use of the feature space extracted from trained discriminators to evaluate samples produced by generators and from the input dataset. A metric based on the resulting t-SNE maps and the Jaccard index is proposed to represent the model quality. Experiments were conducted to assess the progress of GANs when trained using COEGAN. The results show, both by visual inspection and by metrics, that the Evolutionary Algorithm gradually improves discriminators and generators through generations, avoiding problems such as mode collapse.'\nauthor:\n- Victor Costa\n- Nuno Louren\u00e7o\n- Jo\u00e3o Correia\n- Penousal Machado\nbibliography:\n- 'costa.bib'\ntitle: 'Demonstrating the Evolution of GANs through t-SNE'\n---" +"---\nabstract: 'We present a set of paraxial light beams with cylindrical symmetry and a smooth, localized transversal profile carrying finite power, which develop intensity singularities when they are focused in a linear medium, such as vacuum. 
They include beams with orbital angular momentum and with radial polarization, in which case they develop point phase and polarization singularities surrounded by infinitely bright rings, along with singular longitudinal fields. In practice, these effects are manifested in focal intensities and spot sizes, vortex bright ring intensities and radii, and strengths of the longitudinal field, which strongly change with the lens aperture radius. Continuous control of these focal properties is thus exercised without changing the light incident on the lens, with substantially the same collected power, and while maintaining paraxial focusing conditions. As solutions of the Schr\u00f6dinger equation, these exploding beams have analogues in other areas of physics where this equation is the fundamental dynamical model.'\nauthor:\n- 'Miguel A. Porras'\ntitle: 'Exploding paraxial beams, vortex beams, and cylindrical beams of light with finite power in linear media, and their enhanced longitudinal field'\n---\n\nIntroduction\n============\n\nInspired by what happens to some wave functions in quantum mechanics [@PERES], Aiello has recently introduced a class" +"---\nabstract: 'We propose a real-time total-variation denoising method with an automatic choice of the hyper-parameter $\lambda$, whose good performance makes it applicable to a wide range of problems. In this article, we adapt the developed method to non-stationary signals by using a sliding window, and propose a noise variance monitoring method. Simulation results show that the proposed method tracks the variation of the noise variance well.'\nauthor:\n- Zhanhao\u00a0Liu\n- Marion\u00a0Perrodin\n- Thomas\u00a0Chambrion\n- 'Radu\u00a0S. Stoica'\nbibliography:\n- 'ecc2021.bib'\ntitle: Windowed total variation denoising and noise variance monitoring\n---\n\nIntroduction\n============\n\nThe signal $y = (y_1, \cdots, y_n) \in \mathbb{R}^n$ collected by the sensor can be modeled as $y = u + \epsilon$: a random noise $\epsilon$ is added to the useful physical quantity $u$ with $\mathbb{E}(\epsilon) = 0$ and $\mathbb{V}(\epsilon) = \sigma^2$.\n\nWe aim to recover the unknown vector $u=(u_1, \cdots, u_n)\in \mathbb{R}^n$ from the noisy sample vector $y = (y_1, \cdots, y_n)$ with $y_i$ the sample at time $t_i$ by minimizing the Total Variation (TV) restoration functional: $$\begin{aligned}\n F(u, y, \tau, \lambda) = \sum_{i = 1}^n \tau_i(y_i-u_i)^2 + \lambda \sum_{i=2}^{n}|u_i-u_{i-1}|\n \label{equ:tv}\end{aligned}$$ with the sampling period vector $\tau = (\tau_1, \cdots, \tau_n)$ where $\tau_i" +"---\nabstract: 'We show homological mirror symmetry results relating coherent analytic sheaves on some complex elliptic surfaces and objects of certain Fukaya categories. We first define the notion of a non-algebraic Landau-Ginzburg model on ${\mathbb R}\times \left(S^1\right)^3$ and its associated Fukaya category, and show that non-K\u00e4hler surfaces obtained by performing two logarithmic transformations to the product of the projective plane and an elliptic curve have non-algebraic Landau-Ginzburg models as their mirror spaces; this class of surface includes the classical Hopf surface $S^1 \times S^3$ and other elliptic primary and secondary Hopf surfaces. We also define localization maps from the Fukaya categories associated to the Landau-Ginzburg models to partially wrapped and fully wrapped categories. 
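The windowed TV record above states the restoration functional but is truncated before any algorithmic detail. Below is a minimal sketch of minimizing that functional, assuming a synthetic piecewise-constant signal, uniform sampling periods, and a hand-picked $\lambda$ (all illustrative assumptions, not the authors' automatic selection or solver):

```python
import numpy as np
from scipy.optimize import minimize

# Weighted TV functional from the record above:
# F(u) = sum_i tau_i * (y_i - u_i)^2 + lam * sum_{i>=2} |u_i - u_{i-1}|
rng = np.random.default_rng(0)
u_true = np.repeat([0.0, 1.0, -0.5], [15, 15, 10])   # piecewise-constant signal (assumption)
n = u_true.size
y = u_true + 0.2 * rng.standard_normal(n)            # noisy samples
tau = np.ones(n)                                     # uniform sampling periods (assumption)
lam = 0.5                                            # hand-picked lambda (assumption)

def F(u):
    data = np.sum(tau * (y - u) ** 2)       # weighted data-fidelity term
    tv = lam * np.sum(np.abs(np.diff(u)))   # total-variation penalty
    return data + tv

# A derivative-free method copes with the non-smooth TV term on this small problem.
res = minimize(F, x0=y, method="Powell")
print("F at noisy input:", round(F(y), 3), " F at TV estimate:", round(res.fun, 3))
```

For signals of realistic length, a dedicated TV solver (e.g., taut-string or proximal methods) would replace the generic optimizer; the point here is only the shape of the objective.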
We show mirror symmetry results that relate the partially wrapped and fully wrapped categories to spaces of coherent analytic sheaves on open submanifolds of the compact complex surfaces in question, and we use these results to sketch a proof of a full HMS result.'\nauthor:\n- Abigail Ward\nbibliography:\n- 'citations.bib'\ntitle: Homological mirror symmetry for elliptic Hopf Surfaces\n---\n\nIntroduction\n============\n\nFor $q \in {\mathbb C}^*$ with $|q| < 1$, there is a free action of ${\mathbb Z}$ on ${\mathbb C}^n \setminus \{0\}$ given by scaling" +"---\nabstract: |\n The generalized Tur\u00e1n problem ${\text{ex}}(n,T,F)$ is to determine the maximal number of copies of a graph $T$ that can exist in an $F$-free graph on $n$ vertices. Recently, Gerbner and Palmer noted that the solution to the generalized Tur\u00e1n problem is often the original Tur\u00e1n graph. They gave the name \u201c$F$-Tur\u00e1n-good\u201d to graphs $T$ for which, for large enough $n$, the solution to the generalized Tur\u00e1n problem is realized by a Tur\u00e1n graph. They prove that the path graph on two edges, $P_2$, is $K_{r+1}$-Tur\u00e1n-good for all $r\n \ge 3$, but they conjecture that the same result should hold for all $P_\ell$. In this paper, using arguments based on flag algebras, we prove that the path on three edges, $P_3$, is also $K_{r+1}$-Tur\u00e1n-good for all $r \ge 3$.\nauthor:\n- |\n Kyle Murphy\\\n \\\n- |\n JD Nir\\\n \\\nbibliography:\n- 'P3.bib'\ntitle: 'Paths of Length Three are $K_{r+1}$-Tur\u00e1n-Good'\n---\n\nIntroduction\n============\n\nOne of extremal graph theory\u2019s most celebrated results was introduced in [@Turan] by Tur\u00e1n, who asked how many edges a (simple) graph on $n$ vertices can contain if it has no clique containing $r+1$ vertices. Tur\u00e1n\u2019s solution, which we denote ${\text{ex}}(n, K_{r+1})$, is asymptotically $(1-\frac{1}{r})\binom{n}{2}$." +"---\nabstract: 'For certain types of quadratic forms lying in the $n$-th power of the fundamental ideal, we compute upper bounds and, where possible, exact values for the minimal number of general $n$-fold Pfister forms that are needed to write the Witt class of the given form as the sum of the Witt classes of those $n$-fold Pfister forms. We restrict ourselves mostly to the case of so-called rigid fields, i.e. fields in which binary anisotropic forms represent at most 2 square classes.'\nbibliography:\n- 'literatur.bib'\n---\n\n**Pfister Numbers over Rigid Fields**\n\nKeywords: Quadratic form; Pfister number\n\n[Introduction]{}\n\nThroughout this paper, let $F$ be a field of characteristic different from 2. By a quadratic form, or just form for short, we will always mean a finite dimensional non-degenerate quadratic form over $F$. We will denote isometry of two forms $\varphi_1,\varphi_2$ by $\varphi_1\cong\varphi_2$. In abuse of notation, we will denote the Witt class of a quadratic form $\varphi$ again by $\varphi$. An $n$*-fold Pfister form* for some $n\in\operatorname{\mathbb{N}}$ is a form of the shape ${\langle\!\langle a_1,\ldots, a_n\rangle\!\rangle}:={\langle 1, -a_1\rangle}\otimes\cdots\otimes{\langle 1,-a_n\rangle}$ with $a_1,\ldots, a_n\in F^\ast$. The set of $n$-fold Pfister forms over $F$ is denoted by $P_nF$, the set" +"---\nabstract: 'We introduce a performance-optimized method to simulate localization problems on bipartite tight-binding lattices. 
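To make the Pfister-form notation in the record above concrete, the $n=2$ case can be expanded by hand (a standard computation, shown purely as an illustration):

$$\langle\!\langle a, b\rangle\!\rangle := \langle 1, -a\rangle\otimes\langle 1, -b\rangle \cong \langle 1, -a, -b, ab\rangle,$$

so an $n$-fold Pfister form has dimension $2^n$ and always represents $1$.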
It combines an exact renormalization group step to reduce the sparseness of the original problem with the recursive Green\u2019s function method. We apply this framework to investigate the critical behavior of the integer quantum Hall transition of a tight-binding Hamiltonian defined on a simple square lattice. In addition, we employ an improved scaling analysis that includes two irrelevant exponents to characterize the shift of the critical energy as well as the corrections to the dimensionless Lyapunov exponent. We compare our findings with the results of a conventional implementation of the recursive Green\u2019s function method, and we put them into broader perspective in view of recent development in this field.'\naddress:\n- 'Institute of Theoretical Physics, University of Regensburg, 93053 Regensburg, Germany'\n- 'Department of Physics, Missouri University of Science and Technology, Rolla, Missouri 65409, USA'\nauthor:\n- Martin Puschmann\n- Thomas Vojta\ntitle: 'Green\u2019s functions on a renormalized lattice: An improved method for the integer quantum Hall transition'\n---\n\nquantum Hall effect,Anderson localization,critical exponents\n\nIntroduction\n============\n\nThe integer quantum Hall (IQH) transition is a paradigmatic quantum phase transition in the realm of Anderson localization\u00a0[@EveM08]." +"---\nabstract: 'The problem of Bayesian filtering and smoothing in nonlinear models with additive noise is an active area of research. Classical Taylor series as well as more recent sigma-point based methods are two well-known strategies to deal with these problems. However, these methods are inherently sequential and do not in their standard formulation allow for parallelization in the time domain. In this paper, we present a set of parallel formulas that replace the existing sequential ones in order to achieve lower time (span) complexity. Our experimental results done with a graphics processing unit (GPU) illustrate the efficiency of the proposed methods over their sequential counterparts.'\naddress: 'Department of Electrical Engineering and Automation, Aalto University, Finland'\nbibliography:\n- 'strings.bib'\ntitle: 'Parallel Iterated Extended and Sigma-Point Kalman Smoothers'\n---\n\nparallel computing, nonlinear estimation, iterated extended Kalman smoother, sigma-point smoother\n\nIntroduction {#sec:intro}\n============\n\nIn recent years, the rapid advancements in hardware technologies such as graphics processing units (GPUs) and tensor processing units (TPUs) allow compute-intensive workloads to be offloaded from the central processing units (CPUs) by introducing parallelism [@rauber2013parallel; @owens2008gpu; @jouppi2017datacenter]. There is a wide variety of areas that can benefit from parallelization [@cormen2009introduction], one of which is state estimation.\n\nState estimation" +"---\nabstract: 'It is well recognized that population heterogeneity plays an important role in the spread of epidemics. While individual variations in social activity are often assumed to be persistent, i.e. constant in time, here we discuss the consequences of dynamic heterogeneity. By integrating the stochastic dynamics of social activity into traditional epidemiological models we demonstrate the emergence of a new long timescale governing the epidemic in broad agreement with empirical data. 
Our model captures multiple features of real-life epidemics such as COVID-19, including prolonged plateaus and multiple waves, which are transiently suppressed due to the dynamic nature of social activity. The existence of the long timescale due to the interplay between epidemic and social dynamics provides a unifying picture of how a fast-paced epidemic typically will transition to the endemic state.'\nauthor:\n- 'Alexei V. Tkachenko$^{2\\dagger}$, Sergei Maslov$^{1, 4,5\\dagger}$, Tong Wang$^{2,5}$, Ahmed Elbanna$^{3}$, George N.\u00a0Wong$^{1}$, and Nigel Goldenfeld$^{1,5}$'\nbibliography:\n- 'main.bib'\ntitle: 'Stochastic social behavior coupled to COVID-19 dynamics leads to waves, plateaus and an endemic state '\n---\n\nThe COVID-19 pandemic has underscored the prominent role played by population heterogeneity in epidemics. On one hand, the observed transmission of infection is characterized by the phenomenon of super-spreading," +"---\nabstract: 'In this article we study a theory of support varieties over a skew complete intersection $R$, i.e. a skew polynomial ring modulo an ideal generated by a sequence of regular normal elements. We compute the derived braided Hochschild cohomology of $R$ relative to the skew polynomial ring and show its action on ${\\operatorname{Ext}}_R(M,N)$ is noetherian for finitely generated $R$-modules $M$ and $N$ respecting the braiding of $R$. When the parameters defining the skew polynomial ring are roots of unity we use this action to define a support theory. In this setting applications include a proof of the Generalized Auslander-Reiten Conjecture and that $R$ possesses symmetric complexity.'\naddress:\n- 'Department of Mathematics, Texas Tech University, Lubbock, TX 79409, U.S.A.'\n- 'Department of Mathematics & Statistics, Wake Forest University, Winstom-Salem, NC 27109, U.S.A.'\n- 'Department of Mathematics, University of Utah, Salt Lake City, UT 84112, U.S.A.'\nauthor:\n- Luigi Ferraro\n- 'W.\u00a0Frank Moore'\n- Josh Pollitz\nbibliography:\n- 'biblio.bib'\ntitle: Support varieties over skew complete intersections via derived braided Hochschild cohomology\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe use of the cohomological spectrum has had a tremendous impact on modular representation theory; two notable works are those of Carlson [@carlson]" +"---\nabstract: 'We explore the chaotic dynamics and complexity of a neuro-system with respect to variable synaptic weights in both noise free and noisy conditions. The chaotic dynamics of the system is investigated by bifurcation analysis and $0-1$ test. A multiscale complexity of the system is proposed based on the notion of recurrence plot density entropy. Numerical results support the proposed analysis. Impact of music on the aforesaid neuro-system has also been studied. The analysis shows that inclusion of white noise even with a minimal strength makes the neuro dynamics more complex, where as music signal keeps the dynamics almost similar to that of the original system. This is properly interpreted by the proposed multiscale complexity measure.'\naddress:\n- 'Basic Sciences and Humanities Department, Calcutta Institute of Engineering and Management, Kolkata, India'\n- 'Department of Mathematics, Sivanath Sastri College, Kolkata, India'\nauthor:\n- 'Sanjay K. 
Palit'\n- Sayan Mukherjee\ntitle: A study on dynamics and multiscale complexity of a neuro system\n---\n\nNeuro dynamics ,Power noise ,$0-1$ test ,Recurrence plot ,Music signal\n\nIntroduction {#intro}\n============\n\nAn artificial neural network (ANN) is a mathematical model analogical with the biological structure of a neuron, which consists of a cellular body with a" +"---\nabstract: 'Supervised deep learning performance is heavily tied to the availability of high-quality labels for training. Neural networks can gradually overfit corrupted labels if directly trained on noisy datasets, leading to severe performance degradation at test time. In this paper, we propose a novel deep learning framework, namely Co-Seg, to collaboratively train segmentation networks on datasets which include low-quality noisy labels. Our approach first trains two networks simultaneously to sift through all samples and obtain a subset with reliable labels. Then, an efficient yet easily-implemented label correction strategy is applied to enrich the reliable subset. Finally, using the updated dataset, we retrain the segmentation network to finalize its parameters. Experiments in two noisy labels scenarios demonstrate that our proposed model can achieve results comparable to those obtained from supervised learning trained on the noise-free labels. In addition, our framework can be easily implemented in any segmentation algorithm to increase its robustness to noisy labels.'\naddress: |\n $^1$ Department of Electrical Engineering, Columbia University, New York, NY, USA\\\n $^2$ Department of Industrial Engineering and Operations Research, Columbia University, New York, NY, USA\\\n $^3$ Department of Biomedical Engineering, Columbia University, New York, NY, USA\\\n $^4$ NIHR Imperial Biomedical Research Centre, ITMAT" +"---\nabstract: 'This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small ($n=19$) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47%) or have been unable to make plans for 2021 because of uncertainty (37%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality. The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89%), it appeared in the study as the second most referred short-term option (32%) only after changes to labour schemes and policies" +"---\nabstract: 'A mixed-integer linear programming (MILP) formulation is presented for parameter estimation of the Potts model. Two algorithms are developed; the first method estimates the parameters such that the set of ground states replicate the user-prescribed data set; the second method allows the user to prescribe the ground states multiplicity. 
In both instances, the optimization process ensures that the bandgap is maximized. Consequently, the model parameter efficiently describes the user data for a broad range of temperatures. This is useful in the development of energy-based graph models to be simulated on Quantum annealing hardware where the exact simulation temperature is unknown. Computationally, the memory requirement in this method grows exponentially with the graph size. Therefore, this method can only be practically applied to small graphs. Such applications include learning of small generative classifiers and spin-lattice model with energy described by Ising hamiltonian. Learning large data sets poses no extra cost to this method; however, applications involving the learning of high dimensional data are out of scope.'\nauthor:\n- |\n Siddhartha Srivastava[^1], Veera Sundararaghavan[^2]\\\n Department of Aerospace Engineering\\\n University of Michigan\\\n Ann Arbor, MI 48109\\\nbibliography:\n- 'references.bib'\ntitle: 'Bandgap optimization in combinatorial graphs with tailored ground states: Application in" +"---\nabstract: |\n Associated to a finite measure on the real line with finite moments are recurrence coefficients in a three-term formula for orthogonal polynomials with respect to this measure. These recurrence coefficients are frequently inputs to modern computational tools that facilitate evaluation and manipulation of polynomials with respect to the measure, and such tasks are foundational in numerical approximation and quadrature. Although the recurrence coefficients for classical measures are known explicitly, those for nonclassical measures must typically be numerically computed. We survey and review existing approaches for computing these recurrence coefficients for univariate orthogonal polynomial families and propose a novel \u201cpredictor-corrector\" algorithm for a general class of continuous measures. We combine the predictor-corrector scheme with a stabilized Lanczos procedure for a new hybrid algorithm that computes recurrence coefficients for a fairly wide class of measures that can have both continuous and discrete parts. We evaluate the new algorithms against existing methods in terms of accuracy and efficiency.\n\n **Keywords.** Orthogonal polynomials; Recurrence coefficients; General measures; Adaptive quadrature; Lanczos\naddress: 'Department of Mathematics, and Scientific Computing and Imaging (SCI) Institute, the University of Utah'\nauthor:\n- Zexin Liu\n- Akil Narayan\nbibliography:\n- 'references.bib'\ntitle: On the computation of recurrence coefficients" +"---\nabstract: 'We make the split of the integral fractional Laplacian as $(-\\Delta)^s u=(-\\Delta)(-\\Delta)^{s-1}u$, where $s\\in(0,\\frac{1}{2})\\cup(\\frac{1}{2},1)$. Based on this splitting, we respectively discretize the one- and two-dimensional integral fractional Laplacian with the inhomogeneous Dirichlet boundary condition and give the corresponding truncation errors with the help of the interpolation estimate. Moreover, the suitable corrections are proposed to guarantee the convergence in solving the inhomogeneous fractional Dirichlet problem and an $\\mathcal{O}(h^{1+\\alpha-2s})$ convergence rate is obtained when the solution $u\\in C^{1,\\alpha}(\\bar{\\Omega}^{\\delta}_{n})$, where $n$ is the dimension of the space, $\\alpha\\in(\\max(0,2s-1),1]$, $\\delta$ is a fixed positive constant, and $h$ denotes mesh size. 
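The recurrence-coefficient record above revolves around the three-term formula $p_{k+1}(x) = (x - a_k)\,p_k(x) - b_k\,p_{k-1}(x)$ for monic orthogonal polynomials. A minimal sketch follows, assuming the classical Legendre coefficients on $[-1,1]$ are already known; the record's algorithms compute such $a_k, b_k$ for general measures, and this is not their predictor-corrector scheme:

```python
import numpy as np

K = 5
a = np.zeros(K)                                                  # Legendre: a_k = 0 by symmetry
b = np.array([k**2 / (4.0 * k**2 - 1.0) for k in range(1, K)])   # b_k = k^2 / (4k^2 - 1)

def eval_monic(x):
    """Evaluate monic orthogonal polynomials p_0..p_{K-1} via the three-term recurrence."""
    P = np.zeros((K, x.size))
    P[0] = 1.0
    P[1] = x - a[0]
    for k in range(1, K - 1):
        P[k + 1] = (x - a[k]) * P[k] - b[k - 1] * P[k - 1]
    return P

# Orthogonality check with Gauss-Legendre quadrature: the Gram matrix is (near-)diagonal.
x, w = np.polynomial.legendre.leggauss(50)
P = eval_monic(x)
print(np.round(P @ (w[:, None] * P.T), 10))
```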
Finally, numerical experiments confirm the theoretical results.'\nauthor:\n- Jing Sun\n- Weihua Deng\n- Daxin Nie\nbibliography:\n- 'cas-refs.bib'\ntitle: Finite difference method for inhomogeneous fractional Dirichlet problem\n---\n\none- and two-dimensional integral fractional Laplacian, Lagrange interpolation, operator splitting, finite difference, the inhomogeneous fractional Dirichlet problem, error estimates\n\nIntroduction\n============\n\nThe fractional Laplacian is of wide interest to both pure and applied mathematicians, and also has extensive applications in the physics and engineering communities [@Lischke2020; @Deng.2018BPftFaTFO]. Based on the splitting of the integral fractional Laplacian, we provide the finite difference approximations for the one- and two-dimensional cases of the operator. Then the approximations" +"---\nabstract: 'We use Ichino\u2019s period formula combined with a relative trace formula to obtain exact formulas for the central values of triple product $L$-functions $L(3k-1,f\times g\times h)$, averaged over Hecke-normalized cusp newforms of weight $2k$ on $\Gamma_0(N)$, with $f$ and $g$ varying and $h$ fixed. We also present some applications of the average formulas to the nonvanishing problem, giving a lower bound on the number of nonvanishing central $L$-values when one of the forms is fixed.'\naddress: |\n Data Science Institute\\\n Shandong University\\\n Jinan\\\n China\nauthor:\n- Bin Guan\nbibliography:\n- 'Averages\_Nonvanishing\_Triple\_L.bib'\ntitle: ' Averages and Nonvanishing of Central Values of Triple Product L-Functions '\n---\n\n[^1]\n\nIntroduction\n============\n\nMain results\n------------\n\nThe aim of this paper is to establish exact average formulas for central values of triple product $L$-functions associated to three normalized cusp newforms, while one of the three forms is fixed. We also give some applications of the average formulas to the nonvanishing problems.\n\nLet $N,k$ be positive integers and $N$ be square-free. Let ${\mathcal{F}}_{2k}(N)$ denote the set of normalized cusp newforms of weight $2k$ on $\Gamma_0(N)$ which are eigenforms of Hecke operators. Normalizing $f(z)=\sum_{n\geq 1} a_n(f)e^{2\pi inz},g,h\in{\mathcal{F}}_{2k}(N)$ such that $a_1(f)=a_1(g)=a_1(h)=1$, we can define the triple" +"---\nabstract: 'The rapid rise in income inequality in India is a serious concern. While the emphasis is on inclusive growth, it seems difficult to tackle the problem without looking at its intricacies. Social mobility is one such important tool, which helps in reaching the cause of the problem and focuses on bringing long-term equality to the country. The purpose of this study is to examine the role of social background and educational attainment in generating occupation mobility in the country. By applying an extended version of the RC association model to the 68th round (2011-12) of the Employment and Unemployment Survey by the National Sample Survey Office of India, we found that the role of education is not important in generating occupation mobility in India, while social background plays a critical role in determining one\u2019s occupation. This study successfully highlights the strong intergenerational occupation immobility in the country and also the need to focus on education. In this regard, further studies are needed to uncover other crucial factors limiting the growth of individuals in the country.'\nauthor:\n- |\n A. 
Singh, Department of Economics and Finance, Pilani Campus, India,\\\n email: [anu.2singh7@gmail.com](mailto:anu.2singh7@gmail.com)\n- 'A. Forcina, Dipartimento di" +"---\nabstract: 'Recent techniques for the task of short text clustering often rely on word embeddings as a transfer learning component. This paper shows that sentence vector representations from Transformers in conjunction with different clustering methods can be successfully applied to address the task. Furthermore, we demonstrate that the algorithm of enhancement of clustering via iterative classification can further improve initial clustering performance with different classifiers, including those based on pre-trained Transformer language models.'\nauthor:\n- |\n Leonid Pugachev\\\n Moscow Institute of\\\n Physics and Technology\\\n `leonid.pugachev@phystech.edu`\\\n Mikhail Burtsev\\\n Moscow Institute of\\\n Physics and Technology\\\ntitle: Short Text Clustering with Transformers\n---\n\nIntroduction\n============\n\nThere are currently many techniques developed for short text clustering (STC), including topic models and neural networks. The most recent and successful approaches leverage transfer learning through the use of pre-trained word embeddings. In this work, we show that high-quality STC on a range of datasets can be achieved with modern sentence-level transfer learning techniques as well. We use deep sentence representations obtained using the Universal Sentence Encoder (USE) [@DBLP:journals/corr/abs-1803-11175; @DBLP:journals/corr/abs-1907-04307].\n\nTraining of deep architectures can be effective for particular clustering tasks as well. However, application of deep models to clustering directly" +"---\nabstract: 'A large amount of information is published to online social networks every day. Individual privacy-related information may also be disclosed unconsciously by end users. 
Identifying privacy-related data and protecting online social network users from privacy leakage are therefore important. With this motivation, this study aims to propose and develop a hybrid privacy classification approach to detect and classify privacy information from OSNs. The proposed hybrid approach employs both deep learning models and ontology-based models for privacy-related information extraction. Extensive experiments are conducted to validate the proposed hybrid approach, and the empirical results demonstrate its superiority in assisting online social network users against privacy leakage.'\nauthor:\n- Jiaqi Wu\n- Weihua Li\n- Quan Bai\n- Takayuki Ito\n- Ahmed Moustafa\nbibliography:\n- 'bibfile.bib'\ntitle: 'Privacy Information Classification: A Hybrid Approach'\n---\n\nIntroduction\n============\n\nWith the proliferation and popularisation of the World Wide Web, Online Social Networks (OSNs) have become one of the essential channels for social interactions and communications [@batra2018characteristics; @cormode2008key]. OSNs provide great convenience to the users, but these online social platforms also raise potential risks, such as privacy leakage. A vast amount of private information can be accessed publicly through" +"---\nabstract: 'We describe optimization of a cryogenic magnetometer that uses nonlinear kinetic inductance in superconducting nanowires as the sensitive element instead of a superconducting quantum interference device (SQUID). The circuit design consists of a loop geometry with two nanowires in parallel, serving as the inductive section of a lumped LC resonator similar to a kinetic inductance detector (KID). This device takes advantage of the multiplexing capability of the KID, allowing for a natural frequency multiplexed readout. The Kinetic Inductance Magnetometer (KIM) is biased with a DC magnetic flux through the inductive loop. A perturbing signal will cause a flux change through the loop, and thus a change in the induced current, which alters the kinetic inductance of the nanowires, causing the resonant frequency of the KIM to shift. This technology has applications in astrophysics, materials science, and the medical field for readout of Metallic Magnetic Calorimeters (MMCs), axion detection, and magnetoencephalography (MEG).'\nauthor:\n- Sasha Sypkens\n- Farzad Faramarzi\n- Marco Colangelo\n- Adrian Sinclair\n- Ryan Stephenson\n- Jacob Glasby\n- Peter Day\n- Karl Berggren\n- Philip Mauskopf\nbibliography:\n- 'references.bib'\ntitle: 'Development of an Array of Kinetic Inductance Magnetometers (KIMs)'\n---\n\n**Introduction**\n\nHighly sensitive magnetic sensors" +"---\nabstract: 'BNLP is an open-source language processing toolkit for Bengali consisting of tokenization, word embedding, part-of-speech (POS) tagging, and named entity recognition (NER) facilities. BNLP provides pre-trained models with high accuracy for model-based tokenization, embedding, POS, and NER tasks in Bengali. The BNLP pre-trained models achieve strong results in Bengali text tokenization, word embedding, POS, and NER tasks. BNLP is being used widely by the Bengali research communities, with 25K downloads, 138 stars, and 31 forks. 
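The KIM record above turns a flux-induced kinetic-inductance change into a frequency shift. For a lumped $LC$ resonator this follows from a first-order expansion (a standard circuit relation, included only as an illustration):

$$f_0 = \frac{1}{2\pi\sqrt{LC}}, \qquad \frac{\delta f_0}{f_0} \approx -\frac{1}{2}\,\frac{\delta L}{L},$$

so a small increase in the nanowires' kinetic inductance pulls the resonance down by half the fractional inductance change, which is what the frequency-multiplexed readout detects.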
BNLP is available at .'\nauthor:\n- |\n Sagor Sarker\\\n Begum Rokeya University, Rangpur, Bangladesh\\\n `brursagor@gmail.com`\\\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: 'BNLP: Natural language processing toolkit for Bengali'\n---\n\n=1\n\nIntroduction\n============\n\nNatural language processing is one of the most important fields in computational linguistics. Tokenization, embedding, POS tagging, NER, text classification, and language modeling are some of the sub-tasks of NLP. Any computational linguistics researcher or developer needs hands-on tools to do these subtasks efficiently. Due to the recent advancement of NLP, there are many tools and methods for word tokenization, word embedding, POS tagging, and NER in the English language. NLTK [@loper], coreNLP [@manning_2014], spaCy [@spacy2], AllenNLP [@DBLP:journals/corr/abs-1803-07640], Flair [@akbik_2019], stanza [@qi2020stanza] are a few of these tools. These tools provide a variety of methods" +"---\nabstract: 'We study quantum tomography from a continuous measurement record obtained by measuring expectation values of a set of Hermitian operators obtained from unitary evolution of an initial observable. For this purpose, we consider the application of a random unitary, diagonal in a fixed basis at each time step and quantify the information gain in tomography using the Fisher information of the measurement record and the Shannon entropy associated with the eigenvalues of the covariance matrix of the estimation. Surprisingly, very high fidelity of reconstruction is obtained using random unitaries diagonal in a fixed basis even though the measurement record is not informationally complete. We then compare this with the information generated and fidelities obtained by application of a different Haar random unitary at each time step. We give an upper bound on the maximal information that can be obtained in tomography and show that a covariance matrix taken from the Wishart-Laguerre ensemble of random matrices and the associated Marchenko-Pastur distribution saturates this bound. We find that physically, this corresponds to an application of a different Haar random unitary at each time step. We show that repeated application of random diagonal unitaries gives a covariance matrix in tomographic estimation that corresponds" +"---\nabstract: 'In industrial NLP applications, manually labeled data often contain a certain amount of noise. We present a simple method to find the noisy data and relabel them manually, while collecting the correction information. Then we present a novel method to incorporate the human correction information into a deep learning model. Humans know how to correct noisy data, so this correction information can be injected into the deep learning model. We conduct the experiment on our own text classification dataset, which is manually labeled, because we need to relabel the noisy data in our dataset for our industrial application. The experimental results show that our learn-on-correction method improves the classification accuracy from 91.7% to 92.5% on the test dataset. The 91.7% accuracy is obtained by training on the corrected dataset, which improves the baseline from 83.3% to 91.7% on the test dataset. The accuracy under human evaluation achieves more than 97%.'\nauthor:\n- Tong Guo\ntitle: Learning From How Humans Correct\n---\n\nIntroduction\n============\n\nIn recent years, deep learning [@ref_proc2] and BERT-based [@ref_proc1] models have shown significant improvements on almost all NLP tasks. 
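The learning-from-correction record above hinges on first finding the mislabeled samples. Here is a minimal, hypothetical sketch of that discovery step, using confident cross-validated disagreement on synthetic data; the record does not specify this criterion, so the classifier, threshold, and data are all assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
flip = rng.random(500) < 0.1          # simulate 10% label noise
y[flip] = 1 - y[flip]

# Flag samples where a cross-validated model confidently disagrees with the stored label.
proba = cross_val_predict(LogisticRegression(), X, y, cv=5, method="predict_proba")
pred, conf = proba.argmax(axis=1), proba.max(axis=1)
suspects = np.where((pred != y) & (conf > 0.9))[0]
print(f"{suspects.size} samples queued for manual relabeling")
# The (noisy label -> human-corrected label) pairs would then be fed back into training.
```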
However, past methods did not inject human correction information into the deep learning model. Humans interact with the environment and" +"---\nabstract: 'Agent-based models of disease transmission involve stochastic rules that specify how a number of individuals would infect one another, recover or be removed from the population. Common yet stringent assumptions stipulate interchangeability of agents and that all pairwise contacts are equally likely. Under these assumptions, the population can be summarized by counting the number of susceptible and infected individuals, which greatly facilitates statistical inference. We consider the task of inference without such simplifying assumptions, in which case, the population cannot be summarized by low-dimensional counts. We design improved particle filters, where each particle corresponds to a specific configuration of the population of agents, that take either the next or all future observations into account when proposing population configurations. Using simulated data sets, we illustrate that orders of magnitude improvements are possible over bootstrap particle filters. We also provide theoretical support for the approximations employed to make the algorithms practical.'\nauthor:\n- 'Nianqiao Ju [^1]'\n- Jeremy Heng\n- 'Pierre E. Jacob'\nbibliography:\n- 'ref.bib'\ntitle: 'Sequential Monte Carlo algorithms for agent-based models of disease transmission'\n---\n\nIntroduction \[sec:intro\]\n==========================\n\nStatistical inference for agent-based models\n--------------------------------------------\n\nAgent-based models, also called individual-based models, are used in many fields, such as" +"---\nabstract: 'We consider a fairness problem in resource allocation where multiple groups demand resources from a common source with a fixed total amount. The general model was introduced by Elzayn [*et al.*]{}\u00a0\\[FAT\\*\u201919\\]. We follow Donahue and Kleinberg\u00a0\\[FAT\\*\u201920\\], who considered the case when the demand distribution is known. We show that for many common demand distributions that satisfy sharp lower tail inequalities, a natural allocation that provides resources proportional to each group\u2019s average demand performs very well. More specifically, this natural allocation is approximately fair and efficient (i.e., it provides near maximum utilization). We also show that, when a small amount of unfairness is allowed, the Price of Fairness (PoF), in this case, is close to 1.'\nauthor:\n- 'Vacharapat Mettanant[^1]'\n- 'Jittat Fakcharoenphol[^2]'\nbibliography:\n- 'fair.bib'\ntitle: |\n Fair Resource Allocation for Demands\\\n with Sharp Lower Tail Inequalities\n---\n\nIntroduction\n============\n\nResource allocation has been a central problem in computer science and operations research\u00a0[@gross1956class; @katoh1979polynomial; @SHI2015137]. Typically, to distribute resources well, there are many requirements to be considered. One of the most fundamental and important requirements is fairness\u00a0[@demers1989analysis; @procaccia2013cake; @eubanks2018automating]. When fairness is a factor, in a pioneering work, Elzayn [*et al.*]{} [@ElzaynJJKNRS19] proposed a setting" +"---\nabstract: 'Future networks will pave the way for a myriad of applications with different requirements, and Wi-Fi will play an important role in local area networks. This is why network slicing is proposed by 5G networks, making it possible to offer multiple logical networks tailored to the different user requirements over a common infrastructure. 
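The sequential Monte Carlo record above improves on the bootstrap particle filter, which is worth seeing in its plain form. A minimal sketch for a toy stochastic epidemic count model with binomial case reporting follows; the population size, rates, and reporting probability are all illustrative assumptions, not the record's setup:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)
N, T, pop = 1000, 30, 200             # particles, time steps, population size
beta, gamma, rho = 0.3, 0.1, 0.5      # infection rate, recovery prob., reporting prob.

def step(I):
    """Propagate infected counts: binomial new infections and recoveries."""
    new_inf = rng.binomial(pop - I, 1.0 - np.exp(-beta * I / pop))
    return I + new_inf - rng.binomial(I, gamma)

I_true, obs = 5, []
for _ in range(T):                    # simulate ground truth and noisy observations
    I_true = step(np.array([I_true]))[0]
    obs.append(rng.binomial(I_true, rho))

I, est = np.full(N, 5), []
for y in obs:                         # bootstrap particle filter
    I = step(I)                                # propose from the prior dynamics
    w = binom.pmf(y, I, rho) + 1e-300          # weight by observation likelihood
    w /= w.sum()
    est.append(float(np.sum(w * I)))
    I = I[rng.choice(N, size=N, p=w)]          # multinomial resampling
print("first filtered means:", np.round(est[:5], 1))
```

Because the proposal ignores the incoming observation, many particles receive negligible weight; the record above addresses exactly this by proposing with the next (or all future) observations in mind.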
However, this is not supported by current Wi-Fi networks. In this paper, we propose a standard-compliant network slicing approach for the radio access segment of Wi-Fi by defining multiple per . We present two algorithms, one that assigns resources according to the requirements of slices in a static way, and another that dynamically configures the slices according to the network\u2019s conditions and relevant . The proposed algorithms were validated through extensive simulations, conducted in the *ns-3* network simulator, and complemented by theoretical assessments. The obtained results reveal that the two proposed slicing approaches outperform today\u2019s Wi-Fi access technique, reaching lower error probability for bandwidth intensive slices and lower latency for time-critical slices. Simultaneously, the proposed approach is up to 32 times more energy efficient, when considering slices tailored for low-power and low-bandwidth devices, while increasing the overall spectrum efficiency.'\nauthor:\n- \n- \nbibliography:\n- 'main.bib'\ntitle: '5G" +"---\nabstract: 'In this paper, we study quasi post-critically finite degenerations for rational maps. We construct limits for such degenerations as geometrically finite rational maps on a finite tree of Riemann spheres. We prove the boundedness for such degenerations of hyperbolic rational maps with Sierpinski carpet Julia set and give criteria for the convergence for quasi-Blaschke products $\\operatorname{\\mathcal{QB}}_d$, making progress towards the analogues of Thurston\u2019s compactness theorem for acylindrical $3$-manifold and the double limit theorem for quasi-Fuchsian groups in complex dynamics. In the appendix, we apply such convergence results to show the existence of certain polynomial matings.'\naddress: 'Dept. of Mathematics & University of Michigan, Ann Arbor, MI 48109 USA'\nauthor:\n- Yusheng Luo\ntitle: 'On geometrically finite degenerations II: convergence and divergence'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe study of iterations of rational maps on Riemann sphere $\\hat{\\mathbb{C}}$ has been a central topic in dynamics. Classically, hyperbolic rational maps are easy to analyze. They form an open and conjecturally dense subset in the moduli space, and a connected component $\\mathcal{H}$ is called a [*hyperbolic component*]{}. The problem of how the hyperbolic components are positioned has been studied broadly in the literature, especially in low degrees [@DH85; @BH88; @Rees90].\n\nSince a" +"---\nabstract: 'Technologies play an important role in the hiring process for software professionals. Within this process, several studies revealed misconceptions and bad practices which lead to suboptimal recruitment experiences. In the same context, grey literature anecdotally coined the term *R\u00e9sum\u00e9-Driven Development* (RDD), a phenomenon describing the overemphasis of trending technologies in both job offerings and resumes as an interaction between employers and applicants. While RDD has been sporadically mentioned in books and online discussions, there are so far no scientific studies on the topic, despite its potential negative consequences. 
We therefore empirically investigated this phenomenon by surveying 591 software professionals in both hiring (130) and technical (558) roles and identified RDD facets in substantial parts of our sample: 60% of our hiring professionals agreed that trends influence their job offerings, while 82% of our software professionals believed that using trending technologies in their daily work makes them more attractive for prospective employers. Grounded in the survey results, we conceptualize a theory to frame and explain R\u00e9sum\u00e9-Driven Development. Finally, we discuss influencing factors and consequences and propose a definition of the term. Our contribution provides a foundation for future research and raises awareness for a potentially systemic trend that may" +"---\nabstract: 'We investigate the emergence of ferromagnetism in the two-dimensional metal-halide CoBr$_2$, with a special focus on the role of electronic correlations. The calculated phonon spectrum shows that the system is thermodynamically stable unlike other Co halides. We apply two well-known methods for the estimation of the Curie temperature. First, we do DFT+U calculations to calculate exchange couplings, which are subsequently used in a classical Monte Carlo simulation of the resulting Ising spin model. The transition temperature calculated in this way is in the order of 100K, but shows a strong dependence on the choice of interaction parameters. Second, we apply dynamical mean-field theory to calculate the correlated electronic structure and estimate the transition temperature. This results in a similar estimate for a noticeable transition temperature of approximately $100$K, however, without the strong dependence on the interaction parameters.'\nauthor:\n- Hrishit Banerjee\n- Markus Aichhorn\nbibliography:\n- 'main.bib'\ntitle: 'Importance of electronic correlations for the magnetic properties of the two-dimensional ferromagnet CoBr$_2$'\n---\n\nIntroduction\n============\n\nThere has been a lot of recent excitement about functional two-dimensional (2D) materials, which provide opportunities to venture into largely unexplored regions of materials space. On one hand, their thin-film like nature makes them" +"---\nauthor:\n- 'Shinji Hara${}^{1\\dagger}$, Tetsuya Iwasaki${}^{2}$ and Yutaka Hori${}^{3}$'\ntitle: |\n Robust Instability Radius for Multi-agent Dynamical Systems\\\n with Cyclic Structure\n---\n\nIntroduction {#sec:Intro}\n============\n\nThere are a number of interesting and important periodic oscillation phenomena in biology such as Repressilator [@Elowitz2000] in synthetic biology, spike-type periodic signals in neuronal dynamics [@FHNmodel], periodic pattern generation by Turing instability [@YMKH:MBMC2015], and so on. Many of these cases are related to instability of the linearized model around an equilibrium point, and it is generally difficult to derive the exact mathematical models and reduced order approximate models are often utilized for the analysis. Hence, robust instability analysis against dynamic uncertainties is very important to analyze the persistence of oscillation phenomena theoretically.\n\nMotivated by this, the authors have proposed a robust instability problem as a new control problem [@HIH:LCSS2020; @HIH:Automatica2020]. It should be emphasized that the robust instability analysis is similar to but quite different from the robust stability analysis. 
Actually, the former is a strong stabilization problem [@Youla:Automatica1974] to find a minimum norm stable perturbation that stabilizes a given unstable system when the uncertainty is modeled by a ball measured by the $H_\\infty$ norm. This clearly indicates the difficulty of the problem." +"---\nabstract: 'We report on the detection of source noise in the time domain at 162MHz with the Murchison Widefield Array. During the observation the flux of our target source Virgo A (M87) contributes only $\\sim$1% to the total power detected by any single antenna, thus this source noise detection is made in an intermediate regime, where the source flux detected by the entire array is comparable with the noise from a single antenna. The magnitude of source noise detected is precisely in line with predictions. We consider the implications of source noise in this moderately strong regime on observations with current and future instruments.'\nauthor:\n- 'J .S. Morgan,$^{1}$ R. Ekers,$^{1,2}$'\nbibliography:\n- 'refs.bib'\ntitle: 'A Measurement of Source Noise at Low Frequency: Implications for Modern Interferometers'\n---\n\nTechniques: interferometric \u2013 Instrumentation: interferometers \u2013 Radio continuum: general \u2013 Radio lines: general \u2013 Radiation mechanisms: general\n\nINTRODUCTION\n============\n\nSource noise (also known as self noise, wave noise or Hanbury Brown Twiss noise) arises from the fact that most sources studied in radio astronomy are themselves intrinsically noise-like: i.e. stochastic, ergodic, Gaussian random noise [@1999ASPC..180..671R; @2017isra.book.....T 1.2]. Since these natural sources are typically very weak relative to other sources of noise" +"---\nabstract: 'In this work we address the issue of validating the monodomain equation used in combination with the Bueno-Orovio ionic model for the prediction of the activation times in cardiac electro-physiology of the left ventricle. To this aim, we consider four patients who suffered from Left Bundle Branch Block (LBBB). We use activation maps performed at the septum as input data for the model and maps at the epicardial veins for the validation. In particular, a first set (half) of the latter are used to estimate the conductivities of the patient and a second set (the remaining half) to compute the errors of the numerical simulations. We find an excellent agreement between measures and numerical results. Our validated computational tool could be used to accurately predict activation times at the epicardial veins with a short mapping, i.e. by using only a part (the most proximal) of the standard acquisition points, thus reducing the invasive procedure and exposure to radiation.'\naddress:\n- 'MOX, Dipartimento di Matematica, Politecnico di Milano, Milan, Italy'\n- 'LABS, Dipartimento di Chimica, Materiali e Ingegneria Chimica \u201cGiulio Natta\u201d, Politecnico di Milano, Milan, Italy'\n- 'Divisione di Cardiologia, Ospedale S. Maria del Carmine, Rovereto (TN), Italy'\n-" +"---\nabstract: 'We present mock catalogs created to support the interpretation of the CANDELS survey. We extract halos along past lightcones from the Bolshoi Planck dissipationless N-body simulations and populate these halos with galaxies using two different independently developed semi-analytic models of galaxy formation and the empirical model [UniverseMachine]{}. Our mock catalogs have geometries that encompass the footprints of observations associated with the five CANDELS fields. 
In order to allow field-to-field variance to be explored, we have created eight realizations of each field. In this paper, we present comparisons with observable global galaxy properties, including counts in observed frame bands, luminosity functions, color-magnitude distributions and color-color distributions. We additionally present comparisons with physical galaxy parameters derived from SED fitting for the CANDELS observations, such as stellar masses and star formation rates. We find relatively good agreement between the model predictions and CANDELS observations for luminosity and stellar mass functions. We find poorer agreement for colors and star formation rate distributions. All of the mock lightcones as well as curated \u201ctheory friendly\u201d versions of the observational CANDELS catalogs are made available through a web-based data hub.'\nauthor:\n- |\n Rachel S. Somerville$^{1,2}$[^1], Charlotte Olsen$^{2}$, L. Y. Aaron Yung$^{1,2}$, Camilla Pacifici$^3$, Henry" +"---\nabstract: |\n Existing approaches to Dialogue State Tracking (DST) rely on turn level dialogue state annotations, which are expensive to acquire in large scale. In call centers, for tasks like managing bookings or subscriptions, the user goal can be associated with actions (e.g.\u00a0API calls) issued by customer service agents. These action logs are available in large volumes and can be utilized for learning dialogue states. However, unlike turn-level annotations, such logged actions are only available sparsely across the dialogue, providing only a form of weak supervision for DST models.\n\n To efficiently learn DST with sparse labels, we extend a state-of-the-art encoder-decoder model. The model learns a slot-aware representation of dialogue history, which focuses on relevant turns to guide the decoder. We present results on two public multi-domain DST datasets (MultiWOZ and Schema Guided Dialogue) in both settings i.e. training with [*turn-level*]{} and with [*sparse*]{} supervision. The proposed approach improves over baseline in both settings. More importantly, our model trained with sparse supervision is competitive in performance to fully supervised baselines, while being more data and cost efficient.\nauthor:\n- |\n Shuailong Liang ^1^, Lahari Poddar^2^, Gyuri Szarvas^2^\\\n ^1^Singapore University of Technology and Design\\\n , ^2^Amazon Development Center Germany" +"---\nabstract: 'Recently, there has been a large amount of work towards fooling deep-learning-based classifiers, particularly for images, via adversarial inputs that are visually similar to the benign examples. However, researchers usually use $L_p$-norm minimization as a proxy for imperceptibility, which oversimplifies the diversity and richness of real-world images and human visual perception. In this work, we propose a novel perceptual metric utilizing the well-established connection between the low-level image feature fidelity and human visual sensitivity, where we call it *Perceptual Feature Fidelity Loss*. We show that our metric can robustly reflect and describe the imperceptibility of the generated adversarial images validated in various conditions. Moreover, we demonstrate that this metric is highly flexible, which can be conveniently integrated into different existing optimization frameworks to guide the noise distribution for better imperceptibility. 
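A minimal sketch of a feature-fidelity style loss of the kind described above: pixel errors are weighted by low-level image structure instead of a bare $L_p$ norm. The Sobel-gradient weighting and all constants are assumptions for illustration; the paper's metric is more elaborate.

```python
import numpy as np
from scipy.ndimage import sobel

def feature_fidelity_loss(clean, adv, eps=1e-8):
    """Toy perceptual loss: penalize perturbations more in flat regions,
    where the eye is more sensitive, by weighting the squared error with
    the inverse local gradient magnitude of the clean image."""
    gx, gy = sobel(clean, axis=0), sobel(clean, axis=1)
    grad_mag = np.hypot(gx, gy)
    weight = 1.0 / (grad_mag + grad_mag.mean() + eps)  # flat areas get large weight
    return float(np.mean(weight * (adv - clean) ** 2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = img + 0.01 * rng.standard_normal(img.shape)
print(feature_fidelity_loss(img, noisy))
```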
The metric is particularly useful in the challenging black-box attack with limited queries, where the imperceptibility is hard to achieve due to the non-trivial perturbation power.'\nauthor:\n- |\n Pengrui Quan[^1]\\\n University of California, Los Angeles\\\n Los Angeles, U.S.\\\n [prquan@g.ucla.edu]{}\n- |\n Ruiming Guo$^*$\\\n The Chinese University of Hong Kong\\\n Shatin, Hong Kong\\\n [greenming@link.cuhk.edu.hk]{}\n- |\n Mani Srivastava\\\n University of California, Los Angeles\\\n Los Angeles, U.S.\\\n [mbs@ucla.edu]{}\nbibliography:" +"---\nabstract: 'An explosive percolation transition is the abrupt emergence of a giant cluster at a threshold caused by a suppression of the growth of large clusters. In this paper, we consider the information entropy of the cluster size distribution, which is the probability distribution for the size of a randomly chosen cluster. It has been reported that information entropy does not reach its maximum at the threshold in explosive percolation models, a result seemingly contrary to other previous results that the cluster size distribution shows power-law behavior and the cluster size diversity (number of distinct cluster sizes) is maximum at the threshold. Here, we show that this phenomenon is due to that the scaling form of the cluster size distribution is given differently below and above the threshold. We also establish the scaling behaviors of the first and second derivatives of the information entropy near the threshold to explain why the first derivative has a negative minimum at the threshold and the second derivative diverges negatively (positively) at the left (right) limit of the threshold, as predicted through previous simulation.'\nauthor:\n- Yejun Kang\n- Young Sul Cho\ntitle: Scaling behaviors of information entropy in explosive percolation transitions\n---" +"---\nabstract: 'The 1D hinge states are the hallmark of the 3D higher-order topological insulators (HOTI), which may lead to interesting transport properties. Here, we study the Aharonov-Bohm (AB) effect in the interferometer constructed by the hinge states in the normal metal-HOTI junctions with a transverse magnetic field. We show that the AB oscillation of the conductance can clearly manifest the spatial configurations of such hinge states. The magnetic fluxes encircled by various interfering loops are composed of two basic ones, so that the oscillation of the conductance by varying the magnetic field contains different frequency components universally related to each other. Specifically, the four dominant frequencies $\\omega_{x,y}$ and $\\omega_{x\\pm y}$ satisfy the relations $\\omega_{x\\pm y}=\\omega_x\\pm\\omega_y$, which generally holds for different magnetic field, sample size, bias voltage and weak disorder. Our results provide a unique and robust signature of the hinge states and pave the way for exploring AB effect in the 3D HOTI.'\nauthor:\n- Kun Luo\n- Hao Geng\n- Li Sheng\n- Wei Chen\n- 'D. Y. Xing'\ntitle: 'Aharonov-Bohm effect in three-dimensional higher-order topological insulators'\n---\n\nINTRODUCTION\n============\n\nOver the past two decades, topological phases of matter such as topological insulator and superconductor have become an" +"---\nabstract: 'We demonstrate that matching the symmetry properties of a reservoir computer (RC) to the data being processed dramatically increases its processing power. 
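Before the benchmarks reported next, a minimal echo-state-network sketch of what matching an inversion symmetry can look like: an odd activation with no bias terms makes the reservoir equivariant under a global sign flip of the input, which is the symmetry of odd-order parity. Reservoir size, spectral radius, and the ridge penalty are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, n = 200, 4000, 3                    # reservoir size, stream length, parity order
u = rng.choice([-1.0, 1.0], size=T)       # random +/-1 input stream
target = np.array([np.prod(u[t - n + 1:t + 1]) for t in range(n - 1, T)])

W_in = rng.uniform(-1, 1, N)
W = rng.standard_normal((N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

# Odd activation (tanh) and no bias keep the reservoir equivariant under
# u -> -u, matching the inversion symmetry of the odd-order parity task.
x, states = np.zeros(N), []
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    if t >= n - 1:
        states.append(x.copy())
X = np.array(states)

# Ridge-regression readout (linear, hence also odd in the reservoir state).
lam = 1e-6
w_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ target)
acc = np.mean(np.sign(X @ w_out) == target)
print(f"parity-{n} training accuracy: {acc:.3f}")
```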
We apply our method to the parity task, a challenging benchmark problem that highlights inversion and permutation symmetries, and to a chaotic system inference task that presents an inversion symmetry rule. For the parity task, our symmetry-aware RC obtains zero error using an exponentially reduced neural network and training data, greatly speeding up the time to result and outperforming artificial neural networks. When both symmetries are respected, we find that the network size $N$ necessary to obtain zero error for 50 different RC instances scales linearly with the parity-order $n$. Moreover, some symmetry-aware RC instances perform a zero error classification with only $N=1$ for $n\\leq7$. Furthermore, we show that a symmetry-aware RC only needs a training data set with size on the order of $(n+n/2)$ to obtain such performance, an exponential reduction in comparison to a regular RC which requires a training data set with size on the order of $n2^n$ to contain all $2^n$ possible $n-$bit-long sequences. For the inference task, we show that a symmetry-aware RC presents a normalized root-mean-square error three orders-of-magnitude smaller" +"---\nabstract: 'With the open-source revolution, source codes are now more easily accessible than ever. This has, however, made it easier for malicious users and institutions to copy the code without giving regards to the license, or credit to the original author. Therefore, source code author identification is a critical task with paramount importance. In this paper, we propose ICodeNet - a hierarchical neural network that can be used for source code file-level tasks. The ICodeNet processes source code in image format and is employed for the task of per file author identification. The ICodeNet consists of an ImageNet trained VGG encoder followed by a shallow neural network. The shallow network is based either on CNN or LSTM. Different variations of models are evaluated on a source code author classification dataset. We have also compared our image-based hierarchical neural network model with simple image-based CNN architecture and text-based CNN and LSTM models to highlight its novelty and efficiency.'\nauthor:\n- Pranali Bora\n- Tulika Awalgaonkar\n- Himanshu Palve\n- Raviraj Joshi\n- Purvi Goel\nbibliography:\n- 'main.bib'\ntitle: 'ICodeNet - A Hierarchical Neural Network Approach for Source Code Author Identification[^1]'\n---\n\nIntroduction\n============\n\nAs the amount of publicly available source" +"---\nabstract: |\n This paper studies the problem of recovering the hidden vertex correspondence between two edge-correlated random graphs. We focus on the Gaussian model where the two graphs are complete graphs with correlated Gaussian weights and the [Erd\u0151s-R\u00e9nyi]{}model where the two graphs are subsampled from a common parent [Erd\u0151s-R\u00e9nyi]{}graph ${{\\mathcal{G}}}(n,p)$. For dense [Erd\u0151s-R\u00e9nyi]{}graphs with $p=n^{-o(1)}$, we prove that there exists a sharp threshold, above which one can correctly match all but a vanishing fraction of vertices and below which correctly matching any positive fraction is impossible, a phenomenon known as the \u201call-or-nothing\u201d phase transition. Even more strikingly, in the Gaussian setting, above the threshold all vertices can be exactly matched with high probability. In contrast, for sparse [Erd\u0151s-R\u00e9nyi]{}graphs with $p=n^{-\\Theta(1)}$, we show that the all-or-nothing phenomenon no longer holds and we determine the thresholds up to a constant factor. 
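The subsampled Erdős-Rényi pair used throughout this entry is straightforward to instantiate; a sketch in which the function name and parameters are mine:

```python
import numpy as np

def correlated_er_pair(n, p, s, seed=0):
    """Sample two graphs correlated through a common parent G(n, p):
    each parent edge is kept independently with probability s in each
    child, and one child's vertices are secretly permuted."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(n, k=1)
    m = len(iu[0])
    parent = rng.random(m) < p
    a = parent & (rng.random(m) < s)
    b = parent & (rng.random(m) < s)
    A = np.zeros((n, n), bool); A[iu] = a; A = A | A.T
    B = np.zeros((n, n), bool); B[iu] = b; B = B | B.T
    perm = rng.permutation(n)               # the hidden correspondence
    return A, B[np.ix_(perm, perm)], perm

A, B, perm = correlated_er_pair(n=100, p=0.3, s=0.8)
```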
Along the way, we also derive the sharp threshold for exact recovery, sharpening the existing results in Erd\u0151s-R\u00e9nyi graphs\u00a0[@cullina2016improved; @cullina2017exact].\n\n The proof of the negative results builds upon a tight characterization of the mutual information based on the truncated second-moment computation in\u00a0[@wu2020testing] and an \u201carea theorem\u201d that relates the mutual information to the integral of the reconstruction error." +"---\nabstract: 'Mapping the thermal transport properties of materials at the nanoscale is of critical importance for optimizing heat conduction in nanoscale devices. Several methods to determine the thermal conductivity of materials have been developed, most of them yielding an average value across the sample, thereby disregarding the role of local variations. Here, we present a method for the spatially-resolved assessment of the thermal conductivity of suspended graphene by using a combination of confocal Raman thermometry and a finite-element calculations-based fitting procedure. We demonstrate the working principle of our method by extracting the two-dimensional thermal conductivity map of one pristine suspended single-layer graphene sheet and one irradiated using helium ions. Our method paves the way for spatially resolving the thermal conductivity of other types of layered materials. This is particularly relevant for the design and engineering of nanoscale thermal circuits (e.g. thermal diodes).'\nauthor:\n- Oliver Braun\n- Roman Furrer\n- Pascal Butti\n- Kishan Thodkar\n- Ivan Shorubalko\n- Ilaria Zardo\n- Michel Calame\n- 'Mickael L. Perrin'\nbibliography:\n- 'References\\_20210315.bib'\ntitle: 'Spatially mapping the thermal conductivity of graphene by an opto-thermal method'\n---\n\nKeywords: graphene, thermal conductivity, Raman spectroscopy, two-dimensional mapping, suspended\n\n![image](Figures/TOC_V8.pdf){width=\"\\linewidth\"}\n\nIntroduction\n============\n\nThermal properties of" +"---\nabstract: 'The human insulin-glucose metabolism is a time-varying process, which is partly caused by the changing insulin sensitivity of the body. This insulin sensitivity follows a circadian rhythm and its effects should be anticipated by any automated insulin delivery system. This paper presents an extension of our previous work on automated insulin delivery by developing a controller suitable for humans with Type 1 Diabetes Mellitus. Furthermore, we enhance the controller with a new kernel function for the Gaussian Process and deal with noisy measurements, as well as, the noisy training data for the Gaussian Process, arising therefrom. This enables us to move the proposed control algorithm, a combination of Model Predictive Controller and a Gaussian Process, closer towards clinical application. Simulation results on the University of Virginia/Padova FDA-accepted metabolic simulator are presented for a meal schedule with random carbohydrate sizes and random times of carbohydrate uptake to show the performance of the proposed control scheme.'\nauthor:\n- 'Lukas Ortmann$^{1}$, Dawei Shi$^{2}$, Eyal Dassau$^{2}$, Francis J. Doyle III$^{2}$, Berno J.E. 
Misgeld$^{1}$, Steffen Leonhardt$^{1}$ [^1] [^2] [^3]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'ACC2019.bib'\ntitle: '**Automated Insulin Delivery for Type 1 Diabetes Mellitus Patients using Gaussian Process-based Model Predictive Control** '\n---\n\n(15mm,10mm)" +"---\nabstract: 'Recent advances in deep learning techniques have enabled machines to generate cohesive open-ended text when prompted with a sequence of words as context. While these models now empower many downstream applications from conversation bots to automatic storytelling, they have been shown to generate texts that exhibit social biases. To systematically study and benchmark social biases in open-ended language generation, we introduce the Bias in Open-Ended Language Generation Dataset (BOLD), a large-scale dataset that consists of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology. We also propose new automated metrics for toxicity, psycholinguistic norms, and text gender polarity to measure social biases in open-ended text generation from multiple angles. An examination of text generated from three popular language models reveals that the majority of these models exhibit a larger social bias than human-written Wikipedia text across all domains. With these results we highlight the need to benchmark biases in open-ended language generation and caution users of language generation models on downstream tasks to be cognizant of these embedded prejudices.'\nauthor:\n- Jwala Dhamala\n- Tony Sun\n- Varun Kumar\n- Satyapriya Krishna\n- Yada Pruksachatkun\n- 'Kai-Wei Chang'\n-" +"---\nabstract: 'We present an alternative perspective on the training of generative adversarial networks (GANs), showing that the training step for a GAN generator decomposes into two implicit subproblems. In the first, the discriminator provides new target data to the generator in the form of *inverse examples* produced by approximately inverting classifier labels. In the second, these examples are used as targets to update the generator via least-squares regression, regardless of the main loss specified to train the network. We experimentally validate our main theoretical result and demonstrate significant improvements over standard GAN training made possible by making these subproblems explicit.'\nauthor:\n- |\n Romann M.\u00a0Weber\\\n DisneyResearchStudios\\\n Zurich, Switzerland\\\n `romann.weber@disneyresearch.com`\\\nbibliography:\n- 'example\\_paper.bib'\ntitle: |\n Exploiting the Hidden Tasks of GANs:\\\n Making Implicit Subproblems Explicit\n---\n\nIntroduction {#intro}\n============\n\nSoon after their introduction, generative adversarial networks (GANs) [@goodfellow2014generative] quickly became the gold standard in implicit generative modeling. In particular, when it comes to image generation, GANs generally achieve sharper and more convincing results than most of their non-adversarial counterparts (e.g.\u00a0[@karras2020analyzing]). Nevertheless, despite the flood of research into GANs and recent valuable insight into best practices for training these often temperamental models, a fundamental understanding of what makes GANs" +"---\nauthor:\n- 'Christian Br\u00f8nnum-Hansen'\n- 'and Chen-Yu Wang'\nbibliography:\n- 'references.bib'\ntitle: 'Top quark contribution to two-loop helicity amplitudes for boson pair production in gluon fusion'\n---\n\nIntroduction\n============\n\nProduction of $Z$ boson pairs is an important process at the LHC. 
The gluon fusion channel, $gg \to ZZ$, is loop-induced. For this reason, it is suppressed by the strong coupling constant $\alpha_{s}$ in comparison to the quark annihilation channel $q \overline{q} \to ZZ$ which enters at tree level. However, the large gluon flux as well as event selection enhance the contribution of the gluon fusion channel to the hadronic cross section\u00a0[@Binoth:2006mf]. Therefore this production mode is essential for a reliable description of $Z$ boson pair production.\n\nThe current status of the amplitude calculations for this process is as follows. The one-loop amplitude was calculated long ago\u00a0[@Glover:1988rg; @Glover:1988fe]. The two-loop amplitude for massless internal quarks is also known\u00a0[@Caola:2015ila; @vonManteuffel:2015msa]. However, until very recently, contributions of massive quarks have only been calculated approximately\u00a0[@Melnikov:2015laa; @Davies:2020lpf]. The goal of this paper is to present a calculation of the $gg \to ZZ$ two-loop amplitude keeping the dependence on the top quark mass. We note that when this paper was being" +"---\nabstract: 'Decision trees handle tabular data efficiently. Conventional decision tree growth methods often result in suboptimal trees because of their greedy nature. Their inherent structure also limits the options for implementing decision trees in parallel hardware. Here we present a compact representation of binary decision trees to overcome these deficiencies. We explicitly formulate the dependence of prediction on binary tests for binary decision trees and construct a function to guide the input sample from the root to the appropriate leaf node. Based on this formulation, we introduce a new interpretation of binary decision trees. We then approximate this formulation via continuous functions. Finally, we interpret a decision tree as a model combination method and propose the selection-prediction scheme to unify a few learning methods.'\nauthor:\n- |\n Jinxiong Zhang\\\n jinxiongzhang@qq.com\nbibliography:\n- 'ICML.bib'\ntitle: 'Decision Machines: Interpreting Decision Tree as a Model Combination Method'\n---\n\nIntroduction\n============\n\nThe conventional decision tree induction is to recursively partition the training set. During the tree induction, the training set is divided into smaller and smaller subsets according to the test functions of minimum split criteria until a stopping criterion is reached. As a result, it can be" +"---\nabstract: 'We address the problem of tensor decomposition in application to direction-of-arrival (DOA) estimation for transmit beamspace (TB) multiple-input multiple-output (MIMO) radar. A general 4-order tensor model that enables computationally efficient DOA estimation is designed. Whereas other tensor decomposition-based methods treat all factor matrices as arbitrary, the essence of the proposed DOA estimation method is to fully exploit the Vandermonde structure of the factor matrices to take advantage of the shift-invariance between and within different subarrays. Specifically, the received signal of TB MIMO radar is expressed as a 4-order tensor. Depending on the target Doppler shifts, the constructed tensor is reshaped into two distinct 3-order tensors. A computationally efficient tensor decomposition method is proposed to decompose the Vandermonde factor matrices. The generators of the Vandermonde factor matrices are computed to estimate the phase rotations between subarrays, which can be utilized as a look-up table for finding target DOA. 
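A small numerical sketch of why Vandermonde generators give the DOA directly: the steering vector's shift-invariance turns generator estimation into one least-squares solve. Half-wavelength element spacing and the noise level are assumed for illustration, not taken from the entry.

```python
import numpy as np

# A length-M Vandermonde steering vector a(z) = [1, z, ..., z^{M-1}]^T obeys
# a[1:] = z * a[:-1], so the generator z (the inter-element phase rotation)
# follows from a single least-squares solve on an estimated factor column.
rng = np.random.default_rng(2)
M = 8
theta = np.deg2rad(20.0)                      # assumed target direction
z_true = np.exp(1j * np.pi * np.sin(theta))   # half-wavelength element spacing
a = z_true ** np.arange(M)
a_noisy = a + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

z_hat, *_ = np.linalg.lstsq(a_noisy[:-1, None], a_noisy[1:], rcond=None)
doa_deg = np.rad2deg(np.arcsin(np.angle(z_hat[0]) / np.pi))
print(f"estimated DOA: {doa_deg:.2f} deg")    # close to the assumed 20 deg
```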
It is further shown that our proposed method can be used in a more general scenario where the subarray structures can be arbitrary but identical. The proposed DOA estimation method requires no prior information about the tensor rank and is guaranteed to achieve precise decomposition result. Simulation results illustrate the performance" +"---\nabstract: |\n **Purpose:** The aim of this work is to develop a high-performance, flexible and easy-to-use MRI reconstruction framework using the scientific programming language Julia.\\\n **Methods:**\\\n Julia is a modern, general purpose programming language with strong features in the area of signal / image processing and numerical computing. It has a high-level syntax but still generates efficient machine code that is usually as fast as comparable C/C++ applications. In addition to the language features itself, Julia has a sophisticated package management system that makes proper modularization of functionality across different packages feasible. Our developed MRI reconstruction framework MRIReco.jl can therefore reuse existing functionality from other Julia packages and concentrate on the MRI-related parts. This includes common imaging operators and support for MRI raw data formats.\\\n **Results:**\\\n MRIReco.jl is a simple to use framework with a high degree of accessibility. While providing a simple-to-use interface, many of its components can easily be extended and customized. The performance of MRIReco.jl is compared to the Berkeley Advanced Reconstruction Toolbox (BART) and we show that the Julia framework achieves comparable reconstruction speed as the popular C/C++ library.\\\n **Conclusion:**\\\n Modern programming languages can bridge the gap between high performance and accessible implementations. MRIReco.jl leverages" +"---\nabstract: 'Deep learning-based reduced order models (DL-ROMs) have been recently proposed to overcome common limitations shared by conventional reduced order models (ROMs) \u2013 built, e.g., through proper orthogonal decomposition (POD) \u2013 when applied to nonlinear time-dependent parametrized partial differential equations (PDEs). These might be related to [*(i)*]{} the need to deal with projections onto high dimensional linear approximating trial manifolds, [*(ii)*]{} expensive hyper-reduction strategies, or [*(iii)*]{} the intrinsic difficulty to handle physical complexity with a linear superimposition of modes. All these aspects are avoided when employing DL-ROMs, which learn in a non-intrusive way both the nonlinear trial manifold and the reduced dynamics, by relying on deep (e.g., feedforward, convolutional, autoencoder) neural networks. Although extremely efficient at testing time, when evaluating the PDE solution for any new testing-parameter instance, DL-ROMs require an expensive training stage, because of the extremely large number of network parameters to be estimated. In this paper we propose a possible way to avoid an expensive training stage of DL-ROMs, by [*(i)*]{} performing a prior dimensionality reduction through POD, and [*(ii)*]{} relying on a multi-fidelity pretraining stage, where different physical models can be efficiently combined. The proposed POD-DL-ROM is tested on several (both scalar and vector, linear" +"---\nabstract: 'We demonstrate a machine learning approach designed to extract hidden chemistry/physics to facilitate new materials discovery. 
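For the POD-DL-ROM entry above, the prior dimensionality reduction through POD amounts to a thin SVD of a snapshot matrix; a minimal sketch, where the energy criterion and the toy snapshots are my assumptions:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD via thin SVD of a snapshot matrix (n_dofs x n_snapshots):
    return the leading left singular vectors capturing the requested
    fraction of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r]

# Toy snapshots: a travelling pulse sampled at 200 points, 50 time instants.
x = np.linspace(0, 1, 200)
S = np.stack([np.exp(-100 * (x - 0.2 - 0.01 * t) ** 2) for t in range(50)], axis=1)
V = pod_basis(S)
coeffs = V.T @ S            # reduced coordinates a DL-ROM would then learn
print(V.shape, coeffs.shape)
```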
In particular, we propose a novel method for learning latent knowledge from material structure data in which machine learning models are developed to assess the possibility that an atom can be paired with a chemical environment in observed materials. For this purpose, we trained deep neural networks acquiring information from the atom of interest and its environment to estimate the possibility. The models were then used to establish recommendation systems, which can suggest a list of atoms for an environment within a structure. The center atom of that environment was then replaced with the various recommended atoms to generate new structures. Based on these recommendations, we also propose a method of dissimilarity measurement between the atoms and, through hierarchical cluster analysis and visualization using the multidimensional scaling algorithm, illustrate that this dissimilarity can reflect the chemistry of the elements. Finally, our models were applied to the discovery of new structures in the well-known magnetic material Nd$_2$Fe$_{14}$B. Our models propose 108 new structures, 71 of which are confirmed to converge to local-minimum-energy structures with formation energy less than 0.1 eV by first-principles" +"---\nabstract: 'The past year has seen numerous publications underlining the importance of a space mission to the ice giants in the upcoming decade. Proposed mission plans involve a $ \sim $10 year cruise time to the ice giants. This cruise time can be utilized to search for low-frequency gravitational waves (GWs) by observing the Doppler shift caused by them in the Earth\u2013spacecraft radio link. We calculate the sensitivity of prospective ice giant missions to GWs. Then, adopting a steady-state black hole binary population, we derive a conservative estimate for the detection rate of extreme mass ratio inspirals (EMRIs), supermassive\u2013 (SMBH) and stellar mass binary black hole (sBBH) mergers. We link the SMBH population to the fraction of quasars $f_{\mathrm{bin}}$ resulting from galaxy mergers that pair SMBHs to a binary. For a total of ten 40-day observations during the cruise of a single spacecraft, $\mathcal{O}(f_{\mathrm{bin}})\sim0.5$ detections of SMBH mergers are likely, if the Allan deviation of Cassini-era noise is improved by $\sim 10^2$ in the $10^{-5}-10^{-3}$ Hz range. For EMRIs the number of detections lies between $\mathcal{O}(0.1)$ and $\mathcal{O}(100)$. Furthermore, ice giant missions combined with the Laser Interferometer Space Antenna (LISA) would improve the localisation by an order of magnitude compared to LISA" +"---\nabstract: 'SN\u00a02017ein is a narrow\u2013lined Type Ic SN that was found to share a location with a point\u2013like source in the face-on spiral galaxy NGC 3938 in pre\u2013supernova images, making SN\u00a02017ein the first credible detection of a Type Ic progenitor. Results in the literature suggest this point\u2013like source is likely a massive progenitor of 60\u201380 [M$_{\odot}$]{}, depending on whether the source is a binary, a single star, or a compact cluster. Using new photometric and spectral data collected for 200 days, including several nebular spectra, we generate a consistent model covering the photospheric and nebular phase using a Monte Carlo radiation transport code. 
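As a toy illustration of the Monte Carlo transport flavour invoked here, far simpler than a supernova ejecta code: photons crossing a purely absorbing slab of optical depth $\tau$ escape with probability $e^{-\tau}$, which a sampled estimate reproduces.

```python
import numpy as np

# Minimal Monte Carlo transport flavour: photons traverse a purely
# absorbing slab of optical depth tau; the sampled escape fraction
# should approach exp(-tau).
rng = np.random.default_rng(4)
tau, n_photons = 2.0, 200_000
path = rng.exponential(1.0, n_photons)     # optical depth to first interaction
escaped = np.mean(path > tau)
print(escaped, np.exp(-tau))               # both close to 0.135
```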
Photospheric phase modelling finds an ejected mass 1.2\u20132.0 [M$_{\odot}$]{}\u00a0with an [$E_\mathrm{k}$]{}\u00a0of $\sim(0.9 \pm0.2)\times 10^{51}$ erg, with approximately 1 [M$_{\odot}$]{}\u00a0of material below 5000 [km\u00a0s$^{-1}$]{}\u00a0found from the nebular spectra. Both photospheric and nebular phase modelling suggests a [$^{56}$Ni]{}\u00a0mass of 0.08\u20130.1 [M$_{\odot}$]{}. Modelling the \[[O\u00a0[i]{}]{}\] emission feature in the nebular spectra suggests the innermost ejecta is asymmetric. The modelling results favour a low-mass progenitor of 16\u201320 [M$_{\odot}$]{}, which is in disagreement with the pre\u2013supernova derived high-mass progenitor. This contradiction is likely due to the pre\u2013supernova source" +"---\nabstract: 'Euclidean volume ratios between quantum states with positive partial transpose and all quantum states in bipartite systems are investigated. These ratios allow a quantitative exploration of the typicality of entanglement and of its detectability by Bell inequalities. For this purpose a new numerical approach is developed. It is based on the Peres-Horodecki criterion, on a characterization of the convex set of quantum states by inequalities resulting from Newton identities and from Descartes\u2019 rule of signs, and on a numerical approach involving the multiphase Monte Carlo method and the hit-and-run algorithm. This approach confirms not only recent analytical and numerical results on two-qubit, qubit\u2013qutrit, and qubit\u2013four-level qudit states but also allows for a numerically reliable treatment of so far unexplored qutrit\u2013qutrit states. Based on this numerical approach, with the help of the Clauser-Horne-Shimony-Holt inequality and the Collins-Gisin inequality the degree of detectability of entanglement is investigated for two-qubit quantum states. It is investigated quantitatively to what extent a combined test of both Bell inequalities can increase the detectability of entanglement beyond what is achievable by each of these inequalities separately.'\naddress:\n- 'Institut f\u00fcr Angewandte Physik, Technische Universit\u00e4t Darmstadt, D-64289 Darmstadt, Germany'\n- 'Peter Gr\u00fcnberg Institute (PGI-8), Forschungszentrum" +"---\nabstract: 'This paper is concerned with complexity theoretic aspects of a general formulation of quantum game theory that models strategic interactions among rational agents that process and exchange quantum information. In particular, we prove that the computational problem of finding an approximate Nash equilibrium in a broad class of quantum games is, like the analogous problem for classical games, included in (and therefore complete for) the complexity class $\mathrm{PPAD}$. Our main technical contribution, which facilitates this inclusion, is an extension of prior methods in computational game theory to strategy spaces that are characterized by semidefinite programs.'\nauthor:\n- John Bostanci\n- John Watrous\ntitle: Quantum game theory and the complexity of approximating quantum Nash equilibria\n---\n\nIntroduction\n============\n\nGame theory is a fascinating topic of study with connections to computer science, economics, and the social sciences, among other subjects. 
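Returning to the entangled-volume entry above: the Peres-Horodecki test it builds on is a few lines of linear algebra. The sketch below estimates the two-qubit PPT volume fraction by plain Hilbert-Schmidt (Ginibre) sampling rather than the paper's multiphase Monte Carlo and hit-and-run machinery; the sampler and sample count are my choices.

```python
import numpy as np

def random_state(d, rng):
    """Hilbert-Schmidt-distributed density matrix via a Ginibre matrix."""
    g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def is_ppt(rho, da, db):
    """Peres-Horodecki: partial transpose on subsystem B, check eigenvalues."""
    r = rho.reshape(da, db, da, db).transpose(0, 3, 2, 1).reshape(da*db, da*db)
    return np.min(np.linalg.eigvalsh(r)) >= 0

rng = np.random.default_rng(5)
samples = 20_000
ppt_fraction = np.mean([is_ppt(random_state(4, rng), 2, 2) for _ in range(samples)])
print(f"two-qubit PPT volume fraction (HS measure): {ppt_fraction:.3f}")
```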
This paper focuses on complexity theoretic aspects of game theory within the context of quantum information and computation.\n\nQuantum game theory began with the work of David Meyer [@Meyer1999] and Jens Eisert, Martin Wilkens, and Maciej Lewenstein [@EisertWL1999] in 1999.[^1] These works investigated games involving quantum information, highlighting examples in which quantum players have advantages over classical players. Many other" +"---\nabstract: 'Using a minimal model of active Brownian discs, we study the effect of a crucial parameter, namely the softness of the inter-particle repulsion, on motility-induced phase separation. We show that an increase in particle softness reduces the ability of the system to phase-separate and the system exhibit a delayed transition. After phase separation, the system state properties can be explained by a single relevant lengthscale, the effective inter-particle distance. We estimate this lengthscale analytically and use it to rescale the state properties at dense phase for systems with different interaction softness. Using this lengthscale, we provide a scaling relation for the time taken to phase separate which shows a high sensitivity to the interaction softness.'\nauthor:\n- Monika Sanoria\n- Raghunath Chelakkot\n- Amitabha Nandi\nbibliography:\n- 'scn\\_pre\\_rapid.bib'\ntitle: Influence of interaction softness in phase separation of active particles\n---\n\nIntroduction\n============\n\nThe last two decades witnessed a growing interest in the study of *active matter* [@Ramaswamy2010; @vicsek2012collective; @Marchetti2013], a system microscopically composed of a collection of motile entities that drive the system out-of-equilibrium. The collective behaviour of such active systems has been studied using particle-based numerical models where individual active agents self-propel along a body-fixed polarity vector." +"---\nabstract: 'We obtain exact densities of contractible and non-contractible loops in the O(1) model on a strip of the square lattice rolled into an infinite cylinder of finite even circumference $L$. They are also equal to the densities of critical percolation clusters on forty five degree rotated square lattice rolled into a cylinder, which do not or do wrap around the cylinder respectively. The results are presented as explicit rational functions of $L$ taking rational values for any even $L$. Their asymptotic expansions in the large $L$ limit have irrational coefficients reproducing the earlier results in the leading orders. The solution is based on a mapping to the six-vertex model and the use of technique of Baxter\u2019s T-Q equation.'\naddress: |\n [Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980, Dubna, Russia]{}\\\n [National Research University Higher School of Economics, 20 Myasnitskaya, 101000, Moscow, Russia]{}\nauthor:\n- '[A.M. Povolotsky]{}'\ntitle: 'Exact densities of loops in O(1) dense loop model and of clusters in critical percolation on a cylinder. '\n---\n\n[*Keywords*]{}: [O(n) loop models, percolation, six-vertex model, Baxter\u2019s T-Q equation]{}\n\nIntroduction\n============\n\nThe subject of this Letter, $O(1)$ dense loop model (DLM), is a particular case of $O(n)$" +"---\nabstract: |\n **Abstract**\n\n [In this paper, we consider an imperfect finite beam lying on a nonlinear foundation, whose dimensionless stiffness is reduced from $1$ to $k$ as the beam deflection increases. 
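One simple way to encode a foundation whose dimensionless stiffness drops from $1$ to $k$ beyond a threshold deflection is a bilinear restoring force; the threshold $y_0$ and the functional form below are assumptions, since the entry does not spell out its exact law.

```python
import numpy as np

def foundation_force(y, k=0.5, y0=1.0):
    """Bilinear restoring force: unit stiffness up to |y| = y0, reduced
    stiffness k beyond it (continuous at the transition). One common way
    to model a softening foundation; the paper's law may differ."""
    return np.where(np.abs(y) <= y0, y,
                    np.sign(y) * (y0 + k * (np.abs(y) - y0)))

y = np.linspace(-3, 3, 7)
print(foundation_force(y))
```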
Periodic equilibrium solutions are found analytically and are in good agreement with a numerical resolution, suggesting that localized buckling does not appear for a finite beam. The equilibrium paths may exhibit a limit point whose existence is related to the imperfection size and the stiffness parameter $k$ through an explicit condition. The limit point decreases with the imperfection size while it increases with the stiffness parameter. We show that the decay/growth rate is sensitive to the restoring force model. The analytical results on the limit load may be of particular interest for engineers in structural mechanics]{}.\nauthor:\n- 'R. Lagrange'\nbibliography:\n- 'Biblio.bib'\ntitle: Limit point buckling of a finite beam on a nonlinear foundation\n---\n\nIntroduction\n============\n\nAn elastic beam on a foundation is a model that can be found in a broad range of applications: railway tracks, buried pipelines, sandwich panels, coated solids in material, network beams, floating structures... The usual way to model the interaction between the beam and the foundation is to replace the" +"---\nabstract: 'Artificial Intelligence (AI), and in particular, the explainability thereof, has gained phenomenal attention over the last few years. Whilst we usually do not question the decision-making process of these systems in situations where only the outcome is of interest, we do however pay close attention when these systems are applied in areas where the decisions directly influence the lives of humans. It is especially noisy and uncertain observations close to the decision boundary which results in predictions which cannot necessarily be explained that may foster mistrust among end-users. This drew attention to AI methods for which the outcomes can be explained. Bayesian networks are probabilistic graphical models that can be used as a tool to manage uncertainty. The probabilistic framework of a Bayesian network allows for explainability in the model, reasoning and evidence. The use of these methods is mostly ad hoc and not as well organised as explainability methods in the wider AI research field. As such, we introduce a taxonomy of explainability in Bayesian networks. We extend the existing categorisation of explainability in the model, reasoning or evidence to include explanation of decisions. The explanations obtained from the explainability methods are illustrated by means of a" +"---\nabstract: 'We define a probabilistic programming language for Gaussian random variables with a first-class exact conditioning construct. We give operational, denotational and equational semantics for this language, establishing convenient properties like exchangeability of conditions. Conditioning on equality of continuous random variables is nontrivial, as the exact observation may have probability zero; this is *Borel\u2019s paradox*. Using categorical formulations of conditional probability, we show that the good properties of our language are not particular to Gaussians, but can be derived from universal properties, thus generalizing to wider settings. 
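The exact conditioning that the language internalizes has, for Gaussians, a familiar closed form via Schur complements; a sketch of that textbook computation (not of the paper's Cond construction itself):

```python
import numpy as np

def condition(mu, Sigma, idx, value):
    """Exactly condition a joint Gaussian N(mu, Sigma) on X[idx] = value,
    via the standard Schur-complement formulas."""
    n = len(mu)
    keep = [i for i in range(n) if i not in idx]
    A = Sigma[np.ix_(keep, keep)]
    B = Sigma[np.ix_(keep, idx)]
    C = Sigma[np.ix_(idx, idx)]
    K = B @ np.linalg.inv(C)
    mu_post = mu[keep] + K @ (value - mu[idx])
    Sigma_post = A - K @ B.T
    return mu_post, Sigma_post

# x ~ N(0,1); y = x + noise with noise ~ N(0, 0.25); condition on y = 1.0
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 1.0],
                  [1.0, 1.25]])
print(condition(mu, Sigma, [1], np.array([1.0])))  # posterior for x: mean 0.8, var 0.2
```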
We define the Cond construction, which internalizes conditioning as a morphism, providing general compositional semantics for probabilistic programming with exact conditioning.'\nauthor:\n- \n- \nbibliography:\n- 'main.bib'\ntitle: Compositional Semantics for Probabilistic Programs with Exact Conditioning\n---\n\nIntroduction\n============\n\nProbabilistic programming is the paradigm of specifying complex statistical models as programs, and performing inference on them. There are two ways of expressing dependence on observed data, thus learning from them: *soft constraints* and *exact conditioning*. Languages like Stan [@stan] or WebPPL [@dippl] use a scoring construct for soft constraints, re-weighting program traces by observed likelihoods. Other frameworks like Hakaru [@narayanan2016probabilistic] or Infer.NET [@InferNET18] allow exact conditioning on data. In this paper we" +"---\nabstract: 'We are interested in martingale rearrangement couplings. As introduced by Wiesel [@Wi20] in order to prove the stability of Martingale Optimal Transport problems, these are projections in adapted Wasserstein distance of couplings between two probability measures on the real line in the convex order onto the set of martingale couplings between these two marginals. In reason of the lack of relative compactness of the set of couplings with given marginals for the adapted Wasserstein topology, the existence of such a projection is not clear at all. Under a barycentre dispersion assumption on the original coupling which is in particular satisfied by the Hoeffding-Fr\u00e9chet or comonotone coupling, Wiesel gives a clear algorithmic construction of a martingale rearrangement when the marginals are finitely supported and then gets rid of the finite support assumption by relying on a rather messy limiting procedure to overcome the lack of relative compactness. Here, we give a direct general construction of a martingale rearrangement coupling under the barycentre dispersion assumption. This martingale rearrangement is obtained from the original coupling by an approach similar to the construction we gave in [@JoMa18] of the inverse transform martingale coupling, a member of a family of martingale couplings close" +"---\nabstract: 'A recent line of research focuses on the study of the stochastic multi-armed bandits problem (MAB), in the case where temporal correlations of specific structure are imposed between the player\u2019s actions and the reward distributions of the arms (Kleinberg and Immorlica \\[FOCS18\\], Basu et al. \\[NeurIPS19\\]). As opposed to the standard MAB setting, where the optimal solution in hindsight can be trivially characterized, these correlations lead to (sub-)optimal solutions that exhibit interesting dynamical patterns \u2013 a phenomenon that yields new challenges both from an algorithmic as well as a learning perspective. In this work, we extend the above direction to a combinatorial bandit setting and study a variant of stochastic MAB, where arms are subject to matroid constraints and each arm becomes unavailable (blocked) for a fixed number of rounds after each play. A natural common generalization of the state-of-the-art for blocking bandits, and that for matroid bandits, yields a $(1-\\frac{1}{e})$-approximation for partition matroids, yet it only guarantees a $\\frac{1}{2}$-approximation for general matroids. 
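A minimal simulation of the blocking constraint described above, in the single-play special case rather than the matroid setting: each arm is unavailable for a fixed number of rounds after being played, and a greedy oracle picks the best available arm. The means and delays are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
means = np.array([0.9, 0.8, 0.5])   # Bernoulli reward means (assumed known here)
delays = np.array([3, 2, 1])        # arm is blocked this many rounds after a play
T, free_at, reward = 30, np.zeros(3, int), 0

for t in range(T):
    available = np.flatnonzero(free_at <= t)
    if len(available):
        arm = available[np.argmax(means[available])]   # greedy oracle
        reward += rng.binomial(1, means[arm])
        free_at[arm] = t + delays[arm] + 1             # next round it is playable
print(f"greedy total reward over {T} rounds: {reward}")
```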
In this paper we develop new algorithmic ideas that allow us to obtain a polynomial-time $(1 - \frac{1}{e})$-approximation algorithm (asymptotically and in expectation) for any matroid, and thus to control the $(1-\frac{1}{e})$-approximate regret. A key" +"---\nabstract: 'This paper presents several approaches to deal with the problem of identifying muons in a water Cherenkov detector with a reduced water volume and 4 PMTs. Different perspectives of information representation are used and new features are engineered using the specific domain knowledge. As results show, these new features, in combination with the convolutional layers, are able to achieve a good performance avoiding overfitting and being able to generalise properly for the test set. The results also prove that the combination of state-of-the-art Machine Learning analysis techniques and water Cherenkov detectors with low water depth can be used to efficiently identify muons, which may lead to huge investment savings due to the reduction of the amount of water needed at high altitudes. This achievement can be used in further research to be able to discriminate between gamma and hadron induced showers using muons as a discriminant.'\naddress:\n- 'Laborat\u00f3rio de Instrumenta\u00e7\u00e3o e F\u00edsica Experimental de Part\u00edculas (LIP) - Lisbon, Av. Prof. Gama Pinto 2, 1649-003 Lisbon, Portugal and Instituto Superior T\u00e9cnico (IST), Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisbon, Portugal '\n- 'Computer Architecture and Technology Department, University of Granada, Granada, Spain'\nauthor:\n- 'B.S. Gonz\u00e1lez'\n-" +"---\nabstract: 'In this paper we present a right version of the Buchberger algorithm over skew Poincar\u00e9-Birkhoff-Witt extensions (skew PBW extensions for short) defined by Gallego and Lezama [@LezamaGallego]. This algorithm is an adaptation of the left case given in [@Fajardo3]. In particular, we developed a right version of the division algorithm and from this we built the right Gr\u00f6bner bases theory over bijective skew $PBW$ extensions. The algorithms were implemented in the SPBWE library developed in Maple; this paper includes an application of these to the membership problem. The theory developed here is fundamental to complete the `SPBWE` library and thus be able to implement various homological applications that arise as a result of obtaining the right Gr\u00f6bner bases over skew $PBW$ extensions.'\nauthor:\n- |\n William Fajardo[^1]\\\n Seminario de \u00c1lgebra Constructiva - $\text{SAC}^2$\\\n Departamento de Matem\u00e1ticas\\\n Universidad Nacional de Colombia, Bogot\u00e1, Colombia\\\n wafajardoc@unal.edu.co\ntitle: Right Buchberger Algorithm over Bijective Skew $PBW$ Extensions\n---\n\nNon-commutative computational algebra, skew $PBW$ extensions, Buchberger algorithm, Gr\u00f6bner bases, `SPBWE` library, [Maple]{}.\\\n*Mathematics Subject Classification.* 2021: Primary: 16Z05. Secondary: 16D40, 15A21.\n\nSkew $PBW$ extensions\n=====================\n\nIn this section we introduce the *bijective skew $PBW$ extensions*, which are the fundamental topic of this paper. Skew $PBW$" +"---\nabstract: 'The main motivation of this work is to assess the validity of an LWR traffic flow model to model measurements obtained from trajectory data, and propose extensions of this model to improve it. A formulation for a discrete dynamical system is proposed aiming at reproducing the evolution in time of the density of vehicles along a road, as observed in the measurements. 
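The compartmental formulation that the entry goes on to describe lends itself to a very small explicit scheme; in the sketch below the mass-action transfer law throttled by downstream density is an assumed closure, not necessarily the paper's.

```python
import numpy as np

def step(rho, rates, rho_max=1.0, dt=0.1):
    """One explicit step of a compartmental road model: the flow from cell i
    to cell i+1 is rate_i * rho_i * (1 - rho_{i+1}/rho_max), a mass-action
    transfer throttled by downstream congestion. Mass is conserved."""
    flow = rates * rho[:-1] * (1.0 - rho[1:] / rho_max)
    new = rho.copy()
    new[:-1] -= dt * flow
    new[1:] += dt * flow
    return new

rho = np.array([0.8, 0.6, 0.2, 0.1, 0.0])
rates = np.full(4, 0.9)          # per-interface transfer rates: the parameters
rho = step(rho, rates)           # one would fit to trajectory-derived densities
print(rho)
```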
This system is formulated as a chemical reaction network where road cells are interpreted as compartments, the transfer of vehicles from one cell to the other is seen as a chemical reaction between adjacent compartment and the density of vehicles is seen as a concentration of reactant. Several degrees of flexibility on the parameters of this system, which basically consist of the reaction rates between the compartments, can be considered: a constant value or a function depending on time and/or space. Density measurements coming from trajectory data are then interpreted as observations of the states of this system at consecutive times. Optimal reaction rates for the system are then obtained by minimizing the discrepancy between the output of the system and the state measurements. This approach was tested both on simulated and real data, proved successful" +"---\nabstract: 'Preferences often change\u2014even in short time intervals\u2014due to either the mere passage of time (present-biased preferences) or changes in environmental conditions (state-dependent preferences). On the basis of the empirical findings in the context of state-dependent preferences, we critically discuss the Aristotelian view of unitary decision makers in economics and urge a more Heraclitean perspective on human decision-making. We illustrate that the conceptualization of preferences as present-biased or state-dependent has very different normative implications under the Aristotelian view, although both concepts are empirically hard to distinguish. This is highly problematic, as it renders almost any paternalistic intervention justifiable.'\nauthor:\n- 'Sebastian Kr\u00fcgel[^1]'\n- 'Matthias Uhl[^2]'\nbibliography:\n- 'empathygap.bib'\ntitle: 'The Behavioral Economics of Intrapersonal Conflict: A Critical Assessment'\n---\n\n*JEL classification: D01; D90; D91*\n\n*Keywords: Intrapersonal conflict; State-dependent preferences; Projection bias; Paternalism*\n\nIntroduction\n============\n\nAccording to standard economics, rational actors rank present choices according to preferences over the causal consequences of actions. Causality implies the lapse of time, and preferences may change between the times of choice and of consequence. Many years of research in economics and psychology have taught us that changes in preferences between these two points in time may be the rule rather than the exception." +"---\nabstract: 'The nearest active radio galaxy Centaurus (Cen) A is a gamma-ray emitter in GeV to TeV energy scale. The High Energy Stereoscopic System (H.E.S.S.) and non-simultaneous Fermi-LAT observation indicate an unusual spectral hardening above few GeV energies in the gamma-ray spectrum of Cen A. Very recently the H.E.S.S. observatory resolved the kilo parsec (kpc)-scale jets in Centaurus A at TeV energies. On the other hand, the Pierre Auger Observatory (PAO) detects a few ultra high energy cosmic ray (UHECR) events from Cen-A. The proton blazar inspired model, which considers acceleration of both electrons and hadronic cosmic rays in AGN jet, can explain the observed coincident high energy neutrinos and gamma rays from Ice-cube detected AGN jets. Here we have employed the proton blazar inspired model to explain the observed GeV to TeV gamma-ray spectrum features including the spectrum hardening at GeV energies along with the PAO observation on cosmic rays from Cen-A. 
Our findings suggest that the model can explain consistently the observed electromagnetic spectrum in combination with the appropriate number of UHECRs from Cen A.'\nauthor:\n- 'Prabir Banik$^{1,2}$[^1], Arunava Bhadra$^{2}$[^2] and Abhijit Bhattacharyya$^{3}$[^3]'\ntitle: 'Interpreting correlated observations of cosmic rays and gamma-rays from Centaurus A with" +"---\nabstract: 'In this paper, we quantify the rate of convergence between the distribution of number of zeros of random trigonometric polynomials (RTP) with i.i.d. centered random coefficients and the number of zeros of a stationary centered Gaussian process $G$, whose covariance function is given by the sinc function. First, we find the convergence of the RTP towards $G$ in the Wasserstein$-1$ distance, which in turn is a consequence of Donsker Theorem. Then, we use this result to derive the rate of convergence between their respective number of zeros. Since the number of real zeros of the RTP is not a continuous function, we use the Kac-Rice formula to express it as the limit of an integral and, in this way, we approximate it by locally Lipschitz continuous functions.'\naddress:\n- 'Institut de math\u00e9matiques de Toulouse, Universit\u00e9 Paul Sabatier, 118, route de Narbonne F-31062 Toulouse cedex 9, France'\n- 'Centro de Investigaci\u00f3n en Matem\u00e1ticas, UAEH, Carretera Pachuca-Tulancingo km 4.5 Pachuca, Hidalgo 42184, Mexico'\nauthor:\n- Laure Coutin\n- Liliana Peralta\nbibliography:\n- 'references.bib'\ntitle: '[**Rates of convergence for the number of zeros of random trigonometric polynomials**]{}'\n---\n\n[^1]\n\nIntroduction\n============\n\nThe behavior of zeros of random polynomials has been studied" +"---\nabstract: 'Gamma-ray bursts (GRBs) are powered by relativistic jets that exhibit intermittency over a broad range of timescales - from $ \\sim $ ms to seconds. Previous numerical studies have shown that hydrodynamic (i.e., unmagnetized) jets that are expelled from a variable engine are subject to strong mixing of jet and cocoon material, which strongly inhibits the GRB emission. In this paper we conduct 3D RMHD simulations of mildly magnetized jets with power modulation over durations of 0.1 s and 1 s, and a steady magnetic field at injection. We find that when the jet magnetization at the launching site is $\\sigma \\sim 0.1$, the initial magnetization is amplified by shocks formed in the flow to the point where it strongly suppresses baryon loading. We estimate that a significant contamination can be avoided if the magnetic energy at injection constitutes at least a few percent of the jet energy. The variability timescales of the jet after it breaks out of the star are then governed by the injection cycles rather than by the mixing process, suggesting that in practice jet injection should fluctuate on timescales as short as $ \\sim 10 $ ms in order to account for the" +"---\nabstract: 'Designing reward functions for reinforcement learning is difficult: besides specifying which behavior is rewarded for a task, the reward also has to discourage undesired outcomes. Misspecified reward functions can lead to unintended negative side effects, and overall unsafe behavior. To overcome this problem, recent work proposed to augment the specified reward function with an impact regularizer that discourages behavior that has a big impact on the environment. Although initial results with impact regularizers seem promising in mitigating some types of side effects, important challenges remain. 
In this paper, we examine the main current challenges of impact regularizers and relate them to fundamental design decisions. We discuss in detail which challenges recent approaches address and which remain unsolved. Finally, we explore promising directions to overcome the unsolved challenges in preventing negative side effects with impact regularizers.'\nauthor:\n- 'David Lindner,^1^[^1] Kyle Matoba, ^2^ Alexander Meulemans ^3^\\'\nbibliography:\n- 'references.bib'\ntitle: Challenges for Using Impact Regularizers to Avoid Negative Side Effects\n---\n\nIntroduction {#sec:introduction}\n============\n\nSpecifying a reward function in reinforcement learning (RL) that completely aligns with the designer\u2019s intent is a difficult task. Besides specifying what is important to solve the task at hand, the designer also needs to" +"---\nabstract: 'We explore the interplay of electron-electron correlations and surface effects in the prototypical correlated insulating material, NiO. In particular, we compute the electronic structure, magnetic properties, and surface energies of the $(001)$ and $(110)$ surfaces of paramagnetic NiO using a fully charge self-consistent DFT+dynamical mean-field theory method. Our results reveal a complex interplay between electronic correlations and surface effects in NiO, with the electronic structure of the $(001)$ and $(110)$ NiO surfaces being significantly different from that in bulk NiO. We obtain a sizeable reduction of the band gap at the surface of NiO, which is most significant for the $(110)$ NiO surface. This suggests a higher catalytic activity of the $(110)$ NiO surface than that of the $(001)$ NiO one. Our results reveal a charge-transfer character of the $(001)$ and $(110)$ surfaces of NiO. Most notably, for the $(110)$ NiO surface we observe a remarkable electronic state characterized by an alternating charge-transfer and Mott-Hubbard character of the band gap in the surface and subsurface NiO layers, respectively. This novel form of electronic order stabilized by strong correlations is not driven by lattice reconstructions but of purely electronic origin. We notice the importance of orbital-differentiation of the Ni" +"---\nabstract: 'It has been hypothesized that the most likely atomic rearrangement mechanism during grain boundary (GB) migration is the one that minimizes the lengths of atomic displacements in the dichromatic pattern. In this work, we recast the problem of atomic displacement minimization during GB migration as an optimal transport (OT) problem. Under the assumption of a small potential energy barrier for atomic rearrangement, the principle of stationary action applied to GB migration is reduced to the determination of the Wasserstein metric for two point sets. In order to test the minimum distance hypothesis, optimal displacement patterns predicted on the basis of a regularized OT based forward model are compared to molecular dynamics (MD) GB migration data for a variety of GB types and temperatures. Limits of applicability of the minimum distance hypothesis and interesting consequences of the OT formulation are discussed in the context of MD data analysis for twist GBs, general $\\Sigma 3$ twin boundaries and a tilt GB that exhibits shear coupling. 
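In the discrete two-point-set setting, the transport problem above reduces to a linear assignment that minimizes total squared displacement; a sketch without the regularization used in the paper's forward model:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def min_displacement_matching(before, after):
    """Match atoms across a migration event by minimizing total squared
    displacement: the discrete optimal-transport (assignment) problem
    underlying the minimum-distance hypothesis."""
    cost = ((before[:, None, :] - after[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return cols, np.sqrt(cost[rows, cols])

rng = np.random.default_rng(7)
before = rng.random((50, 3))
after = before + 0.02 * rng.standard_normal((50, 3))   # toy "migrated" positions
perm, lengths = min_displacement_matching(before, after)
print(lengths.mean())
```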
The forward model may be used to predict atomic displacement patterns for arbitrary disconnection modes and a variety of metastable states, facilitating the analysis of multimodal GB migration data.'\naddress:\n- 'Department of Materials Science" +"---\nabstract: 'Narrowband and broadband indoor radar images significantly deteriorate in the presence of target dependent and independent static and dynamic clutter arising from walls. A stacked and sparse denoising autoencoder (StackedSDAE) is proposed for mitigating wall clutter in indoor radar images. The algorithm relies on the availability of clean images and corresponding noisy images during training and requires no additional information regarding the wall characteristics. The algorithm is evaluated on simulated Doppler-time spectrograms and high range resolution profiles generated for diverse radar frequencies and wall characteristics in around-the-corner radar (ACR) scenarios. Additional experiments are performed on range-enhanced frontal images generated from measurements gathered from a wideband RF imaging sensor. The results from the experiments show that the StackedSDAE successfully reconstructs images that closely resemble those that would be obtained in free space conditions. Further, the incorporation of sparsity and depth in the hidden layer representations within the autoencoder makes the algorithm more robust to low signal to noise ratio (SNR) and label mismatch between clean and corrupt data during training than the conventional single layer DAE. For example, the denoised ACR signatures show a structural similarity above 0.75 to clean free space images at SNR of $-10dB$ and label" +"---\nabstract: 'We prove the existence and uniqueness of solutions to a class of quadratic BSDE systems which we call triangular quadratic. Our results generalize several existing results about diagonally quadratic BSDEs in the non-Markovian setting. As part of our analysis, we obtain new results about linear BSDEs with unbounded coefficients, which may be of independent interest. Through a non-uniqueness example, we answer a \u201ccrucial open question\" raised by Harter and Richou by showing that the stochastic exponential of an $n \\times n$ matrix-valued ${\\text{BMO}}$ martingale need not satisfy a reverse H\u00f6lder inequality.'\nauthor:\n- 'Joe Jackson and Gordan [\u017d]{}itkovi['' c]{}'\nbibliography:\n- 'qtdrivers.bib'\ntitle: 'Existence and Uniqueness for non-Markovian Triangular Quadratic BSDEs'\n---\n\nIntroduction\n============\n\nBackward stochastic differential equations\n------------------------------------------\n\nA backward stochastic differential equation (BSDE) is an expression of the form\n\n$$\\begin{aligned}\n \\label{introbsde}\n Y = \\xi + \\int_{\\cdot}^T f( \\cdot, Y, {\\boldsymbol{Z}}) dt - \\int_{\\cdot}^T {\\boldsymbol{Z}}d {\\boldsymbol{B}}. \\end{aligned}$$\n\nHere ${\\boldsymbol{B}}$ is a $d$-dimensional Brownian, $f = f(t, \\omega, y, {\\boldsymbol{z}}) : [0,T] \\times \\Omega \\to {}^n \\times ({}^d)^n \\to {}^n$ is a random field called the driver with various measurability and continuity constraints, and $\\xi$ is an $n$-dimensional random vector called the terminal condition which is measurable with" +"---\nabstract: 'Density matrix quantum Monte Carlo (DMQMC) is a recently-developed method for stochastically sampling the $N$-particle thermal density matrix to obtain exact-on-average energies for model and *ab initio* systems. 
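The exact-on-average target of DMQMC, $E(\beta)=\mathrm{Tr}(\hat{H}e^{-\beta\hat{H}})/\mathrm{Tr}(e^{-\beta\hat{H}})$, can be computed deterministically at toy scale by full diagonalization; it is the combinatorial growth of the matrix dimension that forces stochastic sampling in real systems. The random Hamiltonian below is illustrative.

```python
import numpy as np

# Exact thermal expectation value for a small random Hermitian Hamiltonian,
# the quantity DMQMC estimates stochastically for interacting systems.
rng = np.random.default_rng(8)
dim, beta = 200, 2.0
a = rng.standard_normal((dim, dim))
H = (a + a.T) / 2
E = np.linalg.eigvalsh(H)
w = np.exp(-beta * (E - E.min()))          # shift for numerical stability
print("E(beta) =", np.sum(E * w) / np.sum(w))
```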
We report a systematic numerical study of the sign problem in DMQMC based on simulations of atomic and molecular systems. In DMQMC, the density matrix is written in an outer product basis of Slater determinants, and the size of this space is the square of the number of Slater determinants. In principle, this means DMQMC needs to sample a space which scales in the system size, $N$, as $\mathcal{O}[(\exp(N))^2]$. In practice, there is a system-dependent critical walker population ($N_c$) which must be exceeded in order to remove the sign problem, and this imposes limitations by way of storage and computer time. We establish that $N_c$ for DMQMC is the square of $N_c$ for FCIQMC. By contrast, the minimum $N_c$ in the interaction picture modification of DMQMC (IP-DMQMC) is only directly proportional to the $N_c$ for FCIQMC. We find that this comes from the asymmetric propagation of IP-DMQMC compared to the symmetric propagation of canonical DMQMC. An asymmetric mode of propagation is prohibitively expensive for DMQMC because it has a" +"---\nauthor:\n- |\n Matteo Macchini, *Student Member, IEEE*, Manana Lortkipanidze, Fabrizio Schiano, *Member, IEEE*,\\\n and Dario Floreano, *Senior Member, IEEE* [^1]\nbibliography:\n- 'bib/alias.bib'\n- 'bib/IEEEConfAbrv.bib'\n- 'bib/IEEEabrv.bib'\n- 'bib/otherAbrv.bib'\n- 'bib/bibCustom.bib'\ntitle: |\n The Impact of Virtual Reality and Viewpoints\\\n in [Body Motion Based Drone Teleoperation]{}\n---\n\nIntroduction\n============\n\nTelerobotic systems are needed in many fields in which human cognition and decision-making capacities are still crucial to accomplish a mission [@gibo_shared_2016]. Such fields include but are not limited to navigation in challenging and unstructured environments, search and rescue missions, and minimally invasive surgery [@diftler_robonaut_2011; @khatib_ocean_2016; @murphy_search_2008; @bodner_first_2004]. To provide fine control of the telerobotic system, the implementation of an efficient Human-Robot Interface (HRI) is crucial. Most telerobotic applications are currently restricted to a small set of experts who need to undergo long training processes to gain experience and expertise in the task [@chen_human_2007; @casper_human-robot_2003]. With the fast advancements in the field of robotics, new systems require control interfaces that are sufficiently powerful and intuitive also for inexperienced users [@peschel_humanmachine_2013].\n\nBody-Machine Interfaces (BoMIs) are the subdomain of HRIs that consist of the acquisition and processing of body signals for the generation of control inputs for the telerobotic system [@casadio_body-machine_2012]." +"---\nabstract: 'Demand forecasting is one of the fundamental components of a successful revenue management system. This paper provides a new model, which is inspired by cubic smoothing splines, resulting in smooth demand curves per rate class over time until the check-in date. This model makes a trade-off between the forecasting error and the smoothness of the fit, and is therefore able to capture natural guest behavior. The model is tested on hospitality data. We also implemented an optimization module, and computed the expected improvement using our forecast and the optimal pricing policy. 
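The trade-off described above between forecasting error and smoothness of the fit is the defining property of smoothing splines. As a minimal illustration (not the paper's model; the booking data below are entirely synthetic), scipy's `UnivariateSpline` exposes the same trade-off through its smoothing parameter `s`:

```python
# Illustrative sketch (not the paper's exact model): fitting a smoothed
# demand curve over the booking horizon with scipy's smoothing splines.
# The data are synthetic; `s` controls the error/smoothness trade-off.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

# Days before check-in and noisy cumulative bookings for one rate class.
days_out = np.linspace(60, 0, 61)
true_curve = 80 / (1 + np.exp(0.15 * (days_out - 25)))  # hypothetical S-shaped demand
bookings = true_curve + rng.normal(scale=4.0, size=days_out.size)

# UnivariateSpline needs strictly increasing x, so fit against reversed axis.
x = days_out[::-1]
y = bookings[::-1]

for s in (0.0, 50.0, 2000.0):  # 0 -> interpolating fit, large s -> very smooth
    spline = UnivariateSpline(x, y, k=3, s=s)
    resid = np.sum((spline(x) - y) ** 2)
    print(f"s={s}: sum of squared residuals = {resid:.1f}, "
          f"knots = {len(spline.get_knots())}")
```

Small `s` chases the noise; large `s` yields the smooth, slowly varying curve that better matches natural guest behavior.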
Using data from four properties of a major hotel chain, between 2.9% and 10.2% more revenue is obtained than with the heuristic pricing used by the hotels.'\nauthor:\n- |\n Rik van Leeuwen$^{1}$ $\bullet$ Ger Koole$^{2}$\\\n `^{1}Ireckonu, Olympisch Stadion 43, 1076DE, Amsterdam The Netherlands`\\\n `^{2}Department of Mathematics, Vrije Universiteit, De Boelelaan 1111, 1081HV Amsterdam, The Netherlands`\\\n `^{1}rik@ireckonu.com \bullet ^{2}ger.koole@vu.nl`\\\nbibliography:\n- 'bibtex.bib'\nnocite: '[@1; @2; @3; @4; @5; @7; @8; @9; @10; @11; @12; @13; @14; @15; @16; @17; @18; @19; @20]'\ntitle: Demand Forecasting in Hospitality Using Smoothed Demand Curves \n---\n\n[ **Keywords\u2014** Revenue Management, Forecasting, Cubic Smoothing Splines ]{}\n\nIntroduction {#sec: intro}\n============\n\nIn hospitality, revenue" +"---\nabstract: 'Two vertices $u, v \\in V$ of an undirected connected graph $G=(V,E)$ are [*resolved*]{} by a vertex $w$ if the distance between $u$ and $w$ and the distance between $v$ and $w$ are different. A set $R \\subseteq V$ of vertices is a [*$k$-resolving set*]{} for $G$ if for each pair of vertices $u, v \\in V$ there are at least $k$ distinct vertices $w_1,\\ldots,w_k \\in R$ such that each of them resolves $u$ and $v$. The [*$k$-Metric Dimension*]{} of $G$ is the size of a smallest $k$-resolving set for $G$. The decision problem [$k$-Metric Dimension]{} is the question of whether $G$ has a $k$-resolving set of size at most $r$, for a given graph $G$ and a given number $r$. In this paper, we prove the NP-completeness of [$k$-Metric Dimension]{} for bipartite graphs and each $k \\geq 2$.'\nauthor:\n- Yannick Schmitz\n- Duygu Vietz\n- Egon Wanke\ntitle: 'A note on the complexity of [k-metric dimension]{}'\n---\n\nIntroduction\n============\n\nThe metric dimension of graphs was introduced in the 1970s independently by Slater [@Sla75] and by Harary and Melter [@HM76]. We consider simple undirected and connected graphs $G=(V,E)$, where $V$ is the set of vertices and $E" +"---\nabstract: 'The flip-flop qubit, encoded in the states with antiparallel donor-bound electron and donor nuclear spins in silicon, showcases long coherence times, good controllability, and, in contrast to other donor-spin-based schemes, long-distance coupling. Electron spin control near the interface, however, is likely to shorten the relaxation time by many orders of magnitude, reducing the overall qubit quality factor. Here, we theoretically study the multilevel system that is formed by the interacting electron and nuclear spins and derive analytical effective two-level Hamiltonians with and without periodic driving. We then propose an optimal control scheme that produces fast and robust single-qubit gates in the presence of low-frequency noise without relying on parametrically restrictive sweet spots. This scheme increases considerably both the relaxation time and the qubit quality factor.'\nauthor:\n- 'F.\u00a0A.\u00a0Calderon-Vargas'\n- Edwin\u00a0Barnes\n- 'Sophia\u00a0E.\u00a0Economou'\nbibliography:\n- 'library.bib'\ntitle: 'Fast high-fidelity single-qubit gates for flip-flop qubits in silicon'\n---\n\nIntroduction {#sec: Intro}\n============\n\nQuantum computation promises to revolutionize the scientific world, from fundamental science to information technology\u00a0[@Nielsen2010]. 
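As a toy illustration of why the low-frequency noise targeted by the flip-flop qubit paper above matters for single-qubit gates, the following sketch computes the average fidelity of a plain resonant pi-pulse on a two-level system under quasi-static detuning noise. All parameters are made up, and this naive rectangular pulse is not the authors' optimal-control scheme:

```python
# Minimal sketch: average fidelity of a resonant pi-pulse (X gate) on a
# two-level system subject to quasi-static detuning noise (hbar = 1).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U_target = expm(-1j * np.pi / 2 * sx)        # ideal X gate (up to global phase)

def pulse_unitary(omega, delta, t):
    """Evolution under H = (omega/2) sx + (delta/2) sz for time t."""
    H = 0.5 * omega * sx + 0.5 * delta * sz
    return expm(-1j * H * t)

def gate_fidelity(U, V):
    """Average gate fidelity between two single-qubit unitaries."""
    tr = np.trace(U.conj().T @ V)
    return (abs(tr) ** 2 + 2) / 6

rng = np.random.default_rng(1)
omega = 2 * np.pi * 1.0                      # Rabi frequency (arbitrary units)
t_pi = np.pi / omega                         # pi-pulse duration
for sigma in (0.0, 0.05, 0.2):               # rms quasi-static detuning / omega
    deltas = rng.normal(scale=sigma * omega, size=2000)
    F = np.mean([gate_fidelity(U_target, pulse_unitary(omega, d, t_pi))
                 for d in deltas])
    print(f"detuning noise sigma = {sigma:>4} * omega -> avg fidelity = {F:.5f}")
```

The fidelity loss grows quadratically with the noise strength, which is exactly what robust pulse-shaping schemes are designed to suppress.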
In the ongoing race to build the first fully operational quantum computer, donor spin qubits in isotopically purified silicon ($^{28}\mathrm{Si}$)\u00a0[@Itoh2014] are promising candidates due to their long" +"---\nabstract: 'This paper completely characterizes the standard Young tableaux that can be reconstructed from their sets or multisets of $1$-minors. In particular, any standard Young tableau with at least $5$ entries can be reconstructed from its set of $1$-minors.'\naddress:\n- |\n Centro de Matem\u00e1tica e Aplica\u00e7\u00f5es\\\n Faculdade de Ci\u00eancias e Tecnologia\\\n Universidade Nova de Lisboa\\\n 2829\u2013516 Caparica\\\n Portugal \n- |\n Centro de Matem\u00e1tica e Aplica\u00e7\u00f5es\\\n Faculdade de Ci\u00eancias e Tecnologia\\\n Universidade Nova de Lisboa\\\n 2829\u2013516 Caparica\\\n Portugal \nauthor:\n- 'Alan J. Cain'\n- Erkko Lehtonen\nbibliography:\n- '\\jobname.bib'\ntitle: Reconstructing Young Tableaux\n---\n\n[^1]\n\nIntroduction\n============\n\nReconstruction problems are a very general class of problems that ask whether a mathematical object is uniquely determined by a collection of pieces of partial information about the object. A classical example of such a problem, posed by Kelly\u00a0[@Kelly] and Ulam\u00a0[@Ulam], appears in a famous unsolved question in graph theory, the graph reconstruction conjecture, which concerns whether every finite simple graph with at least two vertices is uniquely determined, up to isomorphism, by the collection of its one-vertex-deleted induced subgraphs. Analogous reconstruction problems have been defined and studied for many kinds of mathematical objects, such as relations, posets, matrices," +"---\nabstract: 'The performance of neural network models is often limited by the availability of big data sets. To address this problem, we survey and develop novel synthetic data generation and augmentation techniques for enhancing low/zero-sample learning in satellite imagery. In addition to extending synthetic data generation approaches, we propose a hierarchical detection approach to improve the utility of synthetic training samples. We consider existing techniques for producing synthetic imagery\u20133D models and neural style transfer\u2013as well as introduce our own adversarially trained reskinning network, the GAN-Reskinner, to blend 3D models. Additionally, we test the value of synthetic data in a two-stage, hierarchical detection/classification model of our own construction. To test the effectiveness of synthetic imagery, we employ it in the training of detection models and our two-stage model, and evaluate the resulting models on real satellite images. All modalities of synthetic data are tested extensively on practical, geospatial analysis problems. Our experiments show that synthetic data developed using our approach can often enhance detection performance, particularly when combined with some real training images. When the only source of data is synthetic, our GAN-Reskinner often boosts performance over conventionally rendered 3D models and in all cases the hierarchical model outperforms
As an intermediate step, we generalize the bounds of Bombieri and Pila to curves over global fields and in doing so we sharpen the $B^{\varepsilon}$ factor to a $\log(B)$ factor.'\naddress:\n- '$^{1}$Instituto Argentino de Matem\u00e1ticas Alberto P. Calder\u00f3n-CONICET, Saavedra 15, Piso 3 (1083), Buenos Aires, Argentina;'\n- '$^{2}$Departamento de Matem\u00e1tica, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Argentina.'\nauthor:\n- 'Marcelo Paredes $^{2}$ Rom\u00e1n Sasyk $^{1,2}$'\nbibliography:\n- 'paper.bib'\ntitle: Uniform bounds for the number of rational points on varieties over global fields\n---\n\nIntroduction\n============\n\nLet $X$ be a projective variety defined over a global field $K$. A central problem in diophantine geometry is to find bounds for the number of $K$-rational points in $X$ of bounded height, for some adequate height function. When $K=\mathbb{Q}$ and $X$ is a hypersurface, perhaps the first account of such results with great generality is due to Cohen. Specifically, as a consequence of the results in [@Cohen] concerning Hilbert\u2019s irreducibility theorem, in" +"---\nabstract: 'The use of cash bail as a mechanism for detaining defendants pre-trial is an often-criticized system that many have argued violates the presumption of \u201cinnocent until proven guilty.\u201d Many studies have sought to understand both the long-term effects of cash bail\u2019s use and the disparate rate of cash bail assignments along demographic lines (race, gender, etc.). However, such work is often susceptible to problems of infra-marginality \u2013 that the data we observe can only describe average outcomes, and not the outcomes associated with the marginal decision. In this work, we address this problem by creating a hierarchical Bayesian model of cash bail assignments. Specifically, our approach models cash bail decisions as a probabilistic process whereby judges balance the relative costs of assigning cash bail with the cost of defendants potentially skipping court dates, and where these skip probabilities are estimated based upon features of the individual case. We then use Monte Carlo inference to sample the distribution over these costs for different magistrates and across different races. We fit this model to a data set we have collected of over 50,000 court cases in the Allegheny and Philadelphia counties in Pennsylvania. Our analysis of 50 separate judges shows" +"---\nabstract: 'We study the statistical properties of the yielding transition in model amorphous solids in the limit of slow, athermal deformation. Plastic flow occurs via alternating phases of elastic loading punctuated by rapid dissipative events in the form of collective avalanches. We investigate their characterization through energy vs. stress drops and at multiple stages of deformation, thus revealing a change of spatial extent of the avalanches and degree of stress correlations as deformation progresses. We show that the statistics of stress and energy drops only become comparable for large events in the steady flow regime. 
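A standard tool for extracting the critical exponents discussed next from avalanche statistics such as these stress and energy drops is the continuous maximum-likelihood estimator of Clauset, Shalizi and Newman (2009). A minimal sketch on synthetic power-law data (illustrative only, not the paper's analysis):

```python
# Sketch: maximum-likelihood estimate of a power-law exponent tau for
# avalanche sizes, P(s) ~ s^(-tau) for s >= s_min (continuous MLE).
# The data here are synthetic, generated by inverse-CDF sampling.
import numpy as np

rng = np.random.default_rng(42)
tau_true, s_min = 1.5, 1.0

u = rng.random(50_000)
sizes = s_min * (1 - u) ** (-1 / (tau_true - 1))   # power-law samples

def powerlaw_mle(s, s_min):
    s = s[s >= s_min]
    tau_hat = 1 + s.size / np.sum(np.log(s / s_min))
    err = (tau_hat - 1) / np.sqrt(s.size)          # standard error of the MLE
    return tau_hat, err

tau_hat, err = powerlaw_mle(sizes, s_min)
print(f"estimated tau = {tau_hat:.3f} +/- {err:.3f} (true {tau_true})")
```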
Results for the critical exponents of the yielding transition are discussed in the context of prior studies of similar type, revealing the influence of model glass and preparation history.'\nauthor:\n- C\u00e9line Ruscher\n- J\u00f6rg Rottler\nbibliography:\n- 'biblio.bib'\ndate: 'Received: date / Accepted: date'\ntitle: 'Avalanches in the athermal quasistatic limit of sheared amorphous solids: an atomistic perspective '\n---\n\nIntroduction {#intro}\n============\n\nGranular materials, foams, metallic and colloidal glasses are yield stress materials. When subjected to deformation" +"---\nabstract: 'It is conceivable that an RNA virus could use a polysome, that is, a string of ribosomes covering the RNA strand, to protect the genetic material from degradation inside a host cell. This paper discusses how such a virus might operate, and how its presence might be detected by ribosome profiling. There are two possible forms for such a *polysomally protected virus*, depending upon whether just the forward strand or both the forward and complementary strands can be encased by ribosomes (these will be termed type 1 and type 2, respectively). It is argued that in the type 2 case the viral RNA would evolve an *ambigrammatic* property, whereby the viral genes are free of stop codons in a reverse reading frame (with forward and reverse codons aligned). Recent observations of ribosome profiles of ambigrammatic narnavirus sequences are consistent with our predictions for the type 2 case.'\naddress: |\n $^1$ Chan Zuckerberg Biohub, 499 Illinois Street, San Francisco, CA 94158, USA\\\n $^2$ School of Mathematics and Statistics, The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK\nauthor:\n- 'Michael Wilkinson$^{1,2}$, David Yllanes$^1$ and Greg Huber$^1$'\nbibliography:\n- 'polyvirus.bib'\ntitle: Polysomally Protected Viruses\n---\n\nJanuary 2021\n\nIntroduction\n============\n\nA
We also define a metric of pruned parameter efficiency that could serve as a" +"---\nabstract: 'This article proposes the first discrete-time implementation of a Rydberg quantum walk in multi-dimensional spatial space that could ideally simulate different classes of topological insulators. Using distance-selective exchange interactions between Rydberg-excited atoms in an atomic array with dual lattice constants, the new setup operates both coined and coin-less models of discrete-time quantum walk (DTQW). Here, complicated coupling tessellations are performed by a global laser that exclusively excites sites in the anti-blockade region. The long-range interaction provides a new way of designing different topologically ordered periodic boundary conditions. Limiting the Rydberg population to two excitations, coherent QW over hundreds of lattice sites and steps is achievable with the current technology. These features would improve the performance of this quantum machine in running the quantum search algorithm over topologically ordered databases and would diversify the range of topological insulators that could be simulated.'\nauthor:\n- Mohammadsadegh Khazali\ntitle: ' Discrete-Time Quantum-Walk & Floquet Topological Insulators via Distance-Selective Rydberg-Interaction'\n---\n\nIntroduction\n============\n\nThere is a significant effort in making quantum hardware that outperforms classical counterparts in performing certain algorithms and simulating other complicated quantum systems. Among different approaches, implementing the quantum walk (QW) [@Aha93; @Far98; @Kem03] receives wide interest. Unlike classical random walk," +"---\nabstract: 'SPIRAL 2 is a high-intensity heavy-ion beam accelerator project that has been going on for more than 10 years now. Countless efforts in different disciplines made it what it is today. One of the most important steps after the setup of the different pieces of equipment has been the very first full cool down of the superconducting cavities in an accelerator operation type configuration. While this has been a major achievement for the SPIRAL 2 teams, it also highlighted new challenges and constraints that would have to be addressed in order to have a high availability rate of the beam from the cryogenics side. This paper retraces this particular episode.'\nauthor:\n- Adnan Ghribi\n- Muhammad Aburas\n- Yoann Baumont\n- 'Pierre-Emmanuel Bernaudin'\n- St\u00e9phane Bonneau\n- Guillaume Duteil\n- Robin Ferdinand\n- Michel Lechartier\n- 'Jean-Fran\u00e7ois Leyge'\n- Guillaume Lescali\u00e9\n- Yann Thivel\n- Arnaud Trudel\n- Laurent Valentin\n- Adrien Vassal\nbibliography:\n- 'ref.bib'\ntitle: First full cool down of the SPIRAL 2 superconducting LINAC\n---\n\n\[sec1\]Introduction\n====================\n\nThe GANIL\u2019s (Grand Acc\u00e9l\u00e9rateur National d\u2019Ions Lourds) SPIRAL2 heavy-ion accelerator[@Gales:2011he; @Lewitowicz:2006fx; @Bertrand:2007tp; @Petit:2011ub; @Ferdinand:2010ty] aims at delivering some of the highest intensities of rare isotope beams" +"---\nabstract: 'Resolving the joint localization and synchronization (JLAS) problem is an essential task for moving user nodes (UNs) with clock offset and clock skew. Existing iterative maximum likelihood methods using sequential one-way time-of-arrival (TOA) measurements from the anchor nodes' (AN) broadcast signals require a good initial guess and have a computational complexity that grows with the number of iterations, given the size of the problem.
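The closed-form construction described next rests on a classic trick: squaring the range equations and differencing them against a reference anchor yields a system that is linear in the position. A toy 2-D version with synchronized clocks (anchors and noise levels are made up; the actual CFJLAS additionally estimates clock offset and skew):

```python
# Toy sketch of the squaring-and-differencing idea behind closed-form TOA
# localization (2-D, synchronized clocks for simplicity).
import numpy as np

rng = np.random.default_rng(3)
anchors = np.array([[0.0, 0.0], [50.0, 0.0], [50.0, 40.0], [0.0, 40.0]])
p_true = np.array([21.0, 17.0])

ranges = np.linalg.norm(anchors - p_true, axis=1) + rng.normal(scale=0.1, size=4)

# ||p - a_i||^2 = r_i^2; subtracting the first equation cancels ||p||^2 and
# leaves the linear system A p = b in the unknown position p.
a0, r0 = anchors[0], ranges[0]
A = 2 * (anchors[1:] - a0)
b = (r0**2 - ranges[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
p_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("true:", p_true, " estimate:", np.round(p_hat, 3))
```

In practice the raw linear estimate is then refined by a weighted least squares step, which is also the structure of the approach described next.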
In this paper, we propose a new closed-form JLAS approach, namely CFJLAS, which achieves the asymptotically optimal solution in one shot without initialization when the noise is small, and has a low computational complexity. After squaring and differencing the sequential TOA measurement equations, we devise two intermediate variables to reparameterize the non-linear problem. In this way, we convert the problem to a simpler one of solving two simultaneous quadratic equations. We then solve the equations analytically to obtain a raw closed-form JLAS estimate. Finally, we apply a weighted least squares (WLS) step to refine the estimate. We derive the Cram\u00e9r-Rao lower bound (CRLB), analyze the estimation error, and show that the estimation accuracy of the CFJLAS reaches the CRLB under the small noise condition. The complexity of the new CFJLAS is" +"---\nabstract: 'The phenomenon of entanglement marks one of the furthest departures from classical physics and is indispensable for quantum information processing. Despite its fundamental importance, the distribution of entanglement over long distances through photons is unfortunately hindered by unavoidable decoherence effects. Entanglement distillation is a means of restoring the quality of such diluted entanglement by concentrating it into a pair of qubits. Conventionally, this would be done by distributing multiple photon pairs and distilling the entanglement into a single pair. Here, we turn around this paradigm by utilising pairs of single photons entangled in multiple degrees of freedom. Specifically, we make use of the polarisation and the energy-time domain of photons, both of which are extensively field-tested. We experimentally chart the domain of distillable states and achieve relative fidelity gains up to . Compared to the two-copy scheme, the distillation rate of our single-copy scheme is several orders of magnitude higher, paving the way towards high-capacity and noise-resilient quantum networks.'\nauthor:\n- Sebastian Ecker\n- Philipp Sohr\n- Lukas Bulla\n- |\n \\\n Marcus Huber\n- Martin Bohmann\n- Rupert Ursin\ntitle: 'Experimental Single-Copy Entanglement Distillation'\n---\n\nEntanglement lies at the heart of quantum physics, reflecting the quantum superposition" +"---\nabstract: 'We analyze the problem of quadrangulating an $n$-sided patch, each side at its boundary subdivided into a given number of edges, using a single irregular vertex (or none, when $n = 4$) that breaks the otherwise fully regular lattice. We derive, in an analytical closed-form, (1) the necessary and sufficient conditions that a patch must meet to admit this quadrangulation, and (2) a full description of the resulting tessellation(s).'\nauthor:\n- \nbibliography:\n- 'main.bib'\ntitle: 'Closed-form Quadrangulation of $n$-Sided Patches'\n---\n\nIntroduction\n============\n\nConsider a polygonal-shaped, planar region patch $P$, delimited by $n>1$ sides, each subdivided into a number of edges. 
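A speculative aside on how such closed-form conditions can arise (this is my own formalization for illustration, not the paper's derivation): writing $e_i$ for the number of edges on side $i$ (the notation introduced next) and $c_i \geq 1$ for the subdivision of the interior segment joining the split point of side $i$ to the central vertex, requiring each of the $n$ corner regions created by the Catmull-Clark split to be a regular grid with matching opposite sides yields the cyclic system $c_{i-1} + c_{i+1} = e_i$ (indices mod $n$). A brute-force feasibility check in Python:

```python
# Speculative sketch (one plausible formalization, not the paper's result):
# search for interior subdivisions c_i >= 1 with c_{i-1} + c_{i+1} = e_i.
def quadrangulable(e):
    n = len(e)
    for c0 in range(1, max(e)):
        for c1 in range(1, max(e)):
            c, ok = [c0, c1], True
            for i in range(1, n):            # propagate c_{i+1} = e_i - c_{i-1}
                nxt = e[i] - c[i - 1]
                if nxt < 1:
                    ok = False
                    break
                if i + 1 < n:
                    c.append(nxt)
                else:                        # wrap-around: c_n must equal c_0
                    ok = nxt == c0
            # second wrap-around: e_0 - c_{n-1} must equal c_1
            if ok and e[0] - c[n - 1] == c1:
                return c
    return None

print(quadrangulable([4, 4, 4, 4]))  # a solution exists, e.g. [1, 1, 3, 3]
print(quadrangulable([3, 3, 3]))     # None: total edge parity rules it out
```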
Let $e_i \\in {\\mathbb{N}}$ be the number of edges found on side $i$, $i \\in [0..n-1]$.\n\nWe are interested in determining whether or not $P$ can be quad-tessellated using only one irregular vertex, of valency $n$, somewhere in the interior (even this vertex is regular when $n=4$).\n\nThis tessellation, when it exists, can also be described as the one obtained by applying one step of Catmull-Clark (CC) subdivision [@CC] to the polygon $P$, which creates $n$ quadrilateral regions, followed by a conforming, fully regular tessellation of each of these regions, each at some appropriate grid resolution. For" +"---\nabstract: 'Wireless edge caching is a popular strategy to avoid backhaul congestion in the next generation networks, where the content is cached in advance at base stations to serve redundant requests during peak congestion periods. In the edge caching data, the missing observations are inevitable due to dynamic selective popularity. Among the completion methods, the tensor-based models have been shown to be the most advantageous for missing data imputation. Also, since the observations are correlated across time, files, and base stations, in this paper, we formulate the cooperative caching with recommendations as a fourth-order tensor completion and prediction problem. Since the content library can be large leading to a large dimension tensor, we modify the latent norm-based Frank-Wolfe (FW) algorithm with towards a much lower time complexity using multi-rank updates, rather than rank-1 updates in literature. This significantly lower time computational overhead leads in developing an online caching algorithm. With MovieLens dataset, simulations show lower reconstruction errors for the proposed algorithm as compared to that of the recent FW algorithm, albeit with lower computation overhead. It is also demonstrated that the completed tensor improves normalized cache hit rates for linear prediction schemes.'\nauthor:\n- 'Navneet Garg, , and Tharmalingam" +"---\nabstract: 'Bismuth ferrite is one of the most widely studied multiferroic materials because of its large ferroelectric polarisation coexisting with magnetic order at room temperature. Using density functional theory (DFT), we identify several previously unknown polar and non-polar structures within the low-energy phase space of perovskite-structure bismuth ferrite, [BiFeO$_3$]{}. Of particular interest is a series of non-centrosymmetric structures with polarisation along one lattice vector, combined with anti-polar distortions, reminiscent of ferroelectric domains, along a perpendicular direction. We discuss possible routes to stabilising the new phases using biaxial heteroepitaxial strain or interfacial electrostatic control in heterostructures.'\nauthor:\n- 'Bastien F. Grosso'\n- 'Nicola A. Spaldin'\nbibliography:\n- 'references.bib'\ntitle: 'Prediction of new low-energy phases of [BiFeO$_3$ ]{}with large unit cell and complex tilts beyond Glazer notation'\n---\n\nIntroduction {#sec:introduction}\n============\n\nBismuth ferrite, [BiFeO$_3$]{}, is one of the few materials that combines magnetic order and ferroelectricity in the same phase at room temperature, making it one the most well-studied multiferroics. 
The structural ground state of [BiFeO$_3$ ]{}is a distorted $R3c$-symmetry perovskite, with anti-ferrodistortive rotations of the oxygen octahedra around the pseudo-cubic \[111\] axis, combined with a large polarisation ($\sim$ 90 $\mu$C/cm$^2$) along the \[111\] direction caused by the $6s^2$ lone pairs" +"---\nabstract: |\n In this paper we provide a generalization of the Douglas-Rachford splitting (DRS) for solving monotone inclusions in a real Hilbert space involving a general linear operator. The proposed method activates the linear operator separately from the monotone operators appearing in the inclusion. In the simplest case, when the linear operator is the identity, it reduces to the classical DRS. Moreover, the weak convergence of primal-dual sequences to a primal-dual solution is guaranteed, generalizing the main result in [@svaiter]. Inspired by [@gabay83], we derive a new Split-ADMM (SADMM) by applying our method to the dual of a convex optimization problem involving a linear operator which can be expressed as the composition of two linear operators. The proposed SADMM activates one linear operator implicitly and the other explicitly, and we recover ADMM when the latter is set as the identity. Connections and comparisons of our theoretical results with respect to the literature are provided for the main algorithm and SADMM. The flexibility and efficiency of the method is illustrated via numerical simulations on a sparse minimization problem.\n\n **Keywords.** [*ADMM, convex optimization, Douglas\u2013Rachford splitting, fixed point iterations, monotone operator theory, quasinonexpansive operators, splitting algorithms.*]{}\naddress: 'Departamento de Matem\u00e1tica, Universidad T\u00e9cnica Federico Santa Mar\u00eda, Avenida Espa\u00f1a 1680, Valpara\u00edso, Chile'\nauthor:\n- 'Luis" +"---\nabstract: 'For the first time, the dielectric response of a [BaTiO${}_3$]{} thin film under an AC electric field is investigated using microsecond time-resolved X-ray absorption spectroscopy at the Ti K-edge in order to clarify correlated contributions of each constituent atom to the electronic states. Intensities of the pre-edge [$e_{\mathrm{g}}$]{} peak and shoulder structure just below the main edge increase with an increase in the amplitude of the applied electric field, whereas that of the main peak decreases in an opposite manner. Based on the multiple scattering theory, the increase and decrease of the [$e_{\mathrm{g}}$]{} and main peaks are simulated for different Ti off-center displacements. Our results indicate that these spectral features reflect the inter- and intra-atomic hybridization of Ti 3$d$ with O 2$p$ and Ti 4$p$, respectively. In contrast, the shoulder structure is not affected by changes in the Ti off-center displacement but is susceptible to the effect of the corner site Ba ions. This is the first experimental verification of the electronic contribution of Ba to polarization reversal.'\naddress:\n- 'Graduate School of Advanced Science and Engineering, Hiroshima University, 1-3-1 Kagamiyama, Higashihiroshima, Hiroshima 739-8562, Japan'\n- 'Laboratory for Materials and Structures, Tokyo Institute of Technology, 4259-J2-19 Nagatsuta-cho, Midori-ku, Yokohama
As our main result, we establish an explicit functional inequality between relative entropy and entropy production, which leads to exponential convergence to equilibrium. We stress that our approach is applied uniformly in the lifetime of electrons on the trap level assuming that this lifetime is sufficiently small.'\naddress:\n- 'Institute of Mathematics and Scientific Computing, University of Graz, Heinrichstra\u00dfe 36, 8010 Graz, Austria'\n- 'Faculty of Mathematics, TU Dortmund University, Vogelpothsweg 87, 44227 Dortmund, Germany'\nauthor:\n- Klemens Fellner\n- Michael Kniely\nbibliography:\n- 'TrappedStatesSelfCons.bib'\ntitle: 'Uniform convergence to equilibrium for a family of drift\u2013diffusion models with trap-assisted recombination and self-consistent potential'\n---\n\nIntroduction and main results\n=============================\n\n\[figmodel\]\n\n(Figure: energy-level diagram showing three horizontal levels, the valence band, the trap level, and the conduction band, against a vertical Energy axis, with electron transitions between the bands and the trap level.)\n\nWe consider the following PDE\u2013ODE recombination\u2013drift\u2013diffusion system" +"---\nabstract: 'We present a uniform analysis of six examples of embedded wind shock (EWS) O star X-ray sources observed at high resolution with the [*Chandra*]{} grating spectrometers. By modeling both the hot plasma emission and the continuum absorption of the soft X-rays by the cool, partially ionized bulk of the wind we derive the temperature distribution of the shock-heated plasma and the wind mass-loss rate of each star. We find a similar temperature distribution for each star\u2019s hot wind plasma, consistent with a power-law differential emission measure, $\\frac{d\\log EM}{d\\log T}$, with a slope a little steeper than -2, up to temperatures of only about $10^7$ K. The wind mass-loss rates, which are derived from the broadband X-ray absorption signatures in the spectra, are consistent with those found from other diagnostics. The most notable conclusion of this study is that wind absorption is a very important effect, especially at longer wavelengths. More than 90 per cent of the X-rays between 18 and 25 \u00c5 produced by shocks in the wind of [$\\zeta$\u00a0Pup]{} are absorbed, for example. It appears that the empirical trend of X-ray hardness with spectral subtype among O stars is primarily an absorption effect.'\nauthor:\n- |" +"---\nabstract: 'In this work, chiral anomalies and Drude enhancement in Weyl semimetals are separately discussed from a semi-classical and quantum perspective, clarifying the physics behind Weyl semimetals while avoiding explicit use of topological concepts. The intent is to provide a bridge to these modern ideas for educators, students, and scientists not in the field using the familiar language of traditional solid-state physics at the graduate or advanced undergraduate physics level.'\nauthor:\n- 'Antonio Levy$^{1}$'\n- 'Albert F. Rigosi$^{1}$'\n- 'Francois Joint$^{2}$'\n- 'Gregory S. 
Jenkins$^{3}$'\ntitle: 'A Non-Topological Approach to Understanding Weyl Semimetals'\n---\n\nIntroduction\n============\n\nWeyl fermions (defined below) have historically been of interest in answering fundamental questions about the universe, particularly the observation of the matter-antimatter imbalance.^1^ The family of elementary particles classified as fermions, or particles of half-integer spin, is important in the Standard Model that unifies three of the four known forces of nature. Within the model are twenty-four families of fermions. Almost all of them are massive *Dirac fermions*. Within the family of *Dirac fermions* lies a subset class known as *Weyl fermions*, the set of fermions that are massless. Those well-versed in the physics of the weak nuclear force will recall that those" +"---\nabstract: 'Interacting agent and particle systems are extensively used to model complex phenomena in science and engineering. We consider the problem of learning interaction kernels in these dynamical systems constrained to evolve on Riemannian manifolds from given trajectory data. The models we consider are based on interaction kernels depending on pairwise Riemannian distances between agents, with agents interacting locally along the direction of the shortest geodesic connecting them. We show that our estimators converge at a rate that is independent of the dimension of the state space, and derive bounds on the trajectory estimation error, on the manifold, between the observed and estimated dynamics. We demonstrate the performance of our estimator on two classical first-order interacting systems: Opinion Dynamics and a Predator-Swarm system, with each system constrained to two prototypical manifolds, the $2$-dimensional sphere and the Poincar\u00e9 disk model of hyperbolic space.'\nauthor:\n- Mauro Maggioni\n- Jason Miller\n- Hongda Qiu\n- 'Ming Zhong[^1]'\nbibliography:\n- 'ref.bib'\ntitle: Learning Interaction Kernels for Agent Systems on Riemannian Manifolds\n---\n\n[ ***Keywords:*** [ ]{}]{}\n\nIntroduction {#sec:intro}\n============\n\nDynamical systems of interacting agents, where \u201cagents\u201d may represent atoms, particles, neurons, cells, animals, people, robots, planets, etc., are a fundamental modeling" +"---\nabstract: 'We review recent theoretical and experimental progress in the coherent multiple scattering of weakly interacting disordered Bose gases. These systems have allowed, in recent years, a characterization of weak and strong localization phenomena in disorder at an unprecedented level of control. In this paper, we first discuss the main physical concepts and recent experimental achievements associated with a few emblematic \u201cmesoscopic\u201d effects in disorder like coherent back scattering, coherent forward scattering or mesoscopic echoes, focusing on the context of out-of-equilibrium cold-atom setups. 
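The factor-2 coherent backscattering enhancement behind the effect just mentioned can be caricatured with a toy Monte Carlo: each multiple-scattering path interferes with its time-reversed partner, and the partners are in phase only at exact backscattering. Purely illustrative of the mechanism, with made-up path statistics:

```python
# Toy Monte Carlo of the coherent backscattering (CBS) enhancement.
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_real = 400, 2000

def mean_intensity(partner_phase_random):
    I = 0.0
    for _ in range(n_real):
        phi = rng.uniform(0, 2 * np.pi, n_paths)       # random path phases
        amp = rng.rayleigh(1.0, n_paths)               # random path amplitudes
        if partner_phase_random:
            dphi = rng.uniform(0, 2 * np.pi, n_paths)  # generic direction
        else:
            dphi = np.zeros(n_paths)                   # exact backscattering
        A = np.sum(amp * np.exp(1j * phi) * (1 + np.exp(1j * dphi)))
        I += abs(A) ** 2
    return I / n_real

I_diffuse = mean_intensity(True)
I_peak = mean_intensity(False)
print(f"enhancement factor = {I_peak / I_diffuse:.3f}  (expected ~2)")
```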
We then address the role of weak particle interactions and explain how, depending on their relative strength with respect to the disorder and on the time scales probed, they can give rise to a dephasing mechanism for weak localization, thermalize a non-equilibrium Bose gas or make it superfluid.'\naddress: 'Laboratoire Kastler Brossel, Sorbonne Universit\u00e9, CNRS, ENS-Universit\u00e9 PSL, Coll\u00e8ge de France; 4 Place Jussieu, 75004 Paris, France '\nauthor:\n- Nicolas Cherroret\n- Thibault Scoquart\n- Dominique Delande\ntitle: 'Coherent multiple scattering of out-of-equilibrium interacting Bose gases'\n---\n\nIntroduction\n============\n\nCoherent multiple scattering in solids\n--------------------------------------\n\nIn disordered conductors, the question of how interference in multiple scattering affects transport observables like the conductance has" +"---\nauthor:\n- |\n Ming Du\\\n Advanced Photon Source\\\n Argonne National Laboratory\\\n Lemont, Illinois 60439, USA\\\n mingdu@anl.gov\\\n Xiaojing Huang\\\n National Synchrotron Light Source II\\\n Brookhaven National Laboratory\\\n Upton, New York 11973, USA\\\n Chris Jacobsen\\\n Advanced Photon Source\\\n Argonne National Laboratory, Lemont, Illinois 60439, USA\\\n {Department of Physics & Astronomy, Chemistry of Life Processes Institute}\\\n Northwestern University\\\n Evanston, Illinois 60208, USA\\\n cjacobsen@anl.gov\nbibliography:\n- 'mybib.bib'\ntitle: Using a modified double deep image prior for crosstalk mitigation in multislice ptychography\n---\n\nAbstract {#abstract .unnumbered}\n========\n\nMultislice ptychography is a high-resolution microscopy technique used to image multiple separate axial planes using a single illumination direction. However, multislice ptychography reconstructions are often degraded by crosstalk, where some features on one plane erroneously contribute to the reconstructed image of another plane. Here, we demonstrate the use of a modified \u201cdouble deep image prior\u201d (DDIP) architecture in mitigating crosstalk artifacts in multislice ptychography. Utilizing the tendency of generative neural networks to produce natural images, a modified DDIP method yielded good results on experimental data. For one of the datasets, we show that using DDIP could remove the need for additional experimental data, such as from x-ray fluorescence, to suppress the crosstalk. Our method may help" +"---\nabstract: 'We present a novel Material Point Method (MPM) discretization of surface tension forces that arise from spatially varying surface energies. These variations typically arise from surface energy dependence on temperature and/or concentration. Furthermore, since the surface energy is an interfacial property depending on the types of materials on either side of an interface, spatial variation is required for modeling the contact angle at the triple junction between a liquid, solid and surrounding air. Our discretization is based on the surface energy itself, rather than on the associated traction condition most commonly used for discretization with particle methods. Our energy-based approach automatically captures surface gradients without the explicit need to resolve them as in traction-condition-based approaches. We include an implicit discretization of thermomechanical material coupling with a novel particle-based enforcement of Robin boundary conditions associated with convective heating. 
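A minimal grid-based illustration of the Robin (convective heating) boundary condition just mentioned, -k dT/dx = h (T - T_env) at the boundary, using an explicit finite-difference ghost node. This is not the paper's particle-based MPM enforcement, and all material parameters are made up:

```python
# 1-D heat equation with a Robin (convective) boundary at x = 0 and a
# fixed temperature at x = L, via explicit finite differences.
import numpy as np

k, rho, cp = 1.0, 1.0, 1.0            # conductivity, density, heat capacity
h, T_env = 5.0, 100.0                 # convection coefficient, ambient temp.
alpha = k / (rho * cp)

nx, L = 101, 1.0
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha              # stable explicit time step
T = np.zeros(nx)                      # initial temperature

for _ in range(20_000):
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # Robin BC via a ghost node: T_ghost = T[1] + 2*dx*h/k*(T_env - T[0])
    T[0] = Tn[0] + alpha * dt / dx**2 * (
        2 * Tn[1] - 2 * Tn[0] + 2 * dx * h / k * (T_env - Tn[0]))
    T[-1] = 0.0                       # Dirichlet far end
print(f"surface temperature after heating: {T[0]:.2f} (ambient {T_env})")
```

The ghost-node substitution is the standard grid trick; enforcing the same flux balance directly on material particles is what the MPM formulation above contributes.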
Lastly, we design a particle resampling approach needed to achieve perfect conservation of linear and angular momentum with Affine-Particle-In-Cell (APIC) [@jiang:2015:apic]. We show that our approach enables implicit time stepping for complex behaviors like the Marangoni effect and hydrophobicity/hydrophilicity. We demonstrate the robustness and utility of our method by simulating materials that exhibit highly diverse degrees of surface" +"---\nabstract: 'The traditional notion of capacity studied in the context of memoryless network communication builds on the concept of block-codes and requires that, for sufficiently large blocklength $n$, all receiver nodes simultaneously decode their required information after $n$ channel uses. In this work, we generalize the traditional capacity region by exploring communication rates achievable when some receivers are required to decode their information before others, at different predetermined times; referred to here as the [*time-rate*]{} region. Through a reduction to the standard notion of capacity, we present an inner-bound on the time-rate region. The time-rate region has been previously studied and characterized for the memoryless broadcast channel (with a sole common message) under the name [*static broadcasting*]{}.'\nauthor:\n- 'Michael Langberg\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Michelle Effros [^1] [^2] [^3]'\nbibliography:\n- 'proposal.bib'\n- 'online\\_rateless.bib'\ntitle: 'Beyond Capacity: The Joint Time-Rate Region'\n---\n\nIntroduction {#sec:intro}\n============\n\nIn the context of communication over multi-source multi-terminal memoryless channels (i.e., networks), one traditionally seeks the design of communication schemes that, for a given blocklength $n$, allow the successful decoding of source information at receiver nodes after $n$ channel uses. Roughly speaking,[^4] rate vector ${{{\\underline{R}}}}= (R_1,\\dots,R_k)$ is said to be achievable with blocklength $n$ and decoding error $\\epsilon>0$" +"---\nabstract: 'We apply generative adversarial convolutional neural networks to the problem of style transfer to underdrawings and ghost-images in x-rays of fine art paintings with a special focus on enhancing their spatial resolution. We build upon a neural architecture developed for the related problem of synthesizing high-resolution photo-realistic images from semantic label maps. Our neural architecture achieves high resolution through a hierarchy of generators and discriminator sub-networks, working throughout a range of spatial resolutions. This [*coarse-to-fine*]{} generator architecture can increase the effective resolution by a factor of eight in each spatial direction, or an overall increase in the number of pixels by a factor of 64. We also show that even just a few examples of human-generated image segmentations can greatly improve\u2014qualitatively and quantitatively\u2014the generated images. 
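A minimal sketch of the coarse-to-fine idea just described, in the spirit of such generator hierarchies but not the authors' exact architecture: a global generator works at a downsampled resolution and a local enhancer adds the residual detail at full resolution (PyTorch, with hypothetical layer sizes):

```python
# Two-level coarse-to-fine generator sketch (illustrative, made-up widths).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.InstanceNorm2d(c_out), nn.ReLU(inplace=True))

class CoarseToFineGenerator(nn.Module):
    def __init__(self, c_img=3, width=32):
        super().__init__()
        self.down = nn.AvgPool2d(2)                   # build the coarse input
        self.global_net = nn.Sequential(              # low-resolution generator
            conv_block(c_img, width), conv_block(width, width),
            nn.Conv2d(width, c_img, 3, padding=1))
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.local_net = nn.Sequential(               # full-resolution enhancer
            conv_block(2 * c_img, width),
            nn.Conv2d(width, c_img, 3, padding=1))

    def forward(self, x):
        coarse = self.up(self.global_net(self.down(x)))
        return coarse + self.local_net(torch.cat([x, coarse], dim=1))

g = CoarseToFineGenerator()
print(g(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 3, 128, 128])
```

Stacking further enhancer levels, each doubling the working resolution, is how such hierarchies reach the overall factor-of-eight resolution increase per spatial direction.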
We demonstrate our method on works such as Leonardo\u2019s [*Madonna of the Carnation*]{} and the underdrawing in his [*Virgin of the Rocks*]{}, which pose several special problems in style transfer, including the paucity of representative works from which to learn and transfer style information.'\nauthor:\n- |\n George H.\u00a0Cann, Anthony Bourached, Ryan-Rhys Griffiths, and David G.\u00a0Stork Department of Space and Climate Physics, University College London, London, UK\\\n Oxia Palus, London, UK\\\n Department" +"---\nauthor:\n- 'A.\u00a0Beglarian'\n- 'E.\u00a0Ellinger[^1]'\n- 'N.\u00a0Hau\u00dfmann'\n- 'K.\u00a0Helbing'\n- 'S.\u00a0Hickford'\n- 'U.\u00a0Naumann'\n- 'H.-W.\u00a0Ortjohann'\n- 'M.\u00a0Steidl'\n- 'J.\u00a0Wolf'\n- 'and S.\u00a0W\u00fcstling'\ntitle: Forward Beam Monitor for the KATRIN experiment\n---\n\nIntroduction {#Section:Introduction}\n============\n\nThe KATRIN experiment will improve the sensitivity of neutrino mass measurements to $m_{\\nu} =$ ( C.L.) corresponding to a discovery potential for a mass signal of $m_{\\nu} =$ \u00a0[@Osipowicz:2001; @Angrik:2005] in the most sensitive direct neutrino mass experiment to date. The neutrino mass will be derived from a precise measurement of the shape of the tritium $\\upbeta$-decay spectrum near its endpoint at $E_{0} =$ \u00a0[@PRL2019]. The source of $\\upbeta$-electrons is a *Windowless Gaseous Tritium Source* (WGTS) which has an activity of .\n\nThe layout of the KATRIN beamline [@Arenz_2016] is shown in [figure\u00a0\\[Figure:KatrinBeamline\\]]{}. The *Source and Transport Section* (STS) consists of the WGTS, the *Differential Pumping Section* (DPS), the *Cryogenic Pumping Section* (CPS), and several source monitoring and calibration systems\u00a0[@Babutzka:2012]. Along the beamline superconducting solenoids generate a magnetic field of several tesla which adiabatically guides the $\\upbeta$-electrons towards the spectrometers while excess tritium is pumped out of the system. The *Spectrometer and" +"---\nabstract: 'We study both classical and quantum algorithms to solve a hard optimization problem, namely 3\u2013XORSAT on 3\u2013regular random graphs. By introducing a new quasi\u2013greedy algorithm that is not allowed to jump over large energy barriers, we show that the problem hardness is mainly due to entropic barriers. We study, both analytically and numerically, several optimization algorithms, finding that entropic barriers affect classical local algorithms and quantum annealing in a similar way. For the adiabatic algorithm, the difficulty we identify is distinct from that of tunnelling under large barriers, but does, nonetheless, give rise to exponential running (annealing) times.'\nauthor:\n- Matteo Bellitti\n- 'Federico Ricci-Tersenghi'\n- Antonello Scardicchio\nbibliography:\n- 'biblio.bib'\ntitle: Entropic barriers as a reason for hardness in both classical and quantum algorithms\n---\n\nIntroduction {#sec:introduction}\n============\n\nHard discrete optimization problems are ubiquitous in scientific disciplines and practical applications. 
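One concrete way to see that the hardness discussed above is a statement about local dynamics rather than worst-case complexity: XORSAT instances are linear systems over GF(2), so a global algorithm, Gaussian elimination, solves them in polynomial time even where local search and annealing stall. A sketch with a simple random ensemble (three variables per check, not exactly the 3-regular ensemble of the paper):

```python
# Random 3-XORSAT solved by Gaussian elimination over GF(2).
import numpy as np

rng = np.random.default_rng(0)
n, m = 24, 24                                  # variables, parity checks

A = np.zeros((m, n), dtype=np.uint8)
for row in A:                                  # each check touches 3 variables
    row[rng.choice(n, size=3, replace=False)] = 1
b = rng.integers(0, 2, size=m, dtype=np.uint8)

def gf2_solve(A, b):
    """Gauss-Jordan elimination over GF(2); returns a solution or None."""
    M = np.concatenate([A, b[:, None]], axis=1).astype(np.uint8)
    pivots, r = [], 0
    for c in range(A.shape[1]):
        rows = np.nonzero(M[r:, c])[0]
        if rows.size == 0:
            continue
        M[[r, r + rows[0]]] = M[[r + rows[0], r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        pivots.append(c)
        r += 1
        if r == M.shape[0]:
            break
    if any(M[i, :-1].sum() == 0 and M[i, -1] for i in range(M.shape[0])):
        return None                            # inconsistent system
    x = np.zeros(A.shape[1], dtype=np.uint8)   # free variables set to 0
    for i, c in enumerate(pivots):
        x[c] = M[i, -1]
    return x

x = gf2_solve(A, b)
print("satisfiable:", x is not None)
if x is not None:
    print("all checks satisfied:", np.array_equal(A @ x % 2, b))
```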
The problem of minimizing a complex cost function (or equivalently maximizing a reward function) naturally appears in many different contexts: e.g.\u00a0in physics in the computation of ground state configurations, in statistics in the maximization of the likelihood, in machine learning in the training of artificial neural networks, and so on.\n\nAlthough real-world problems have usually" +"---\nabstract: 'We propose a neural network approach to model general interaction dynamics and an adjoint-based stochastic gradient descent algorithm to calibrate its parameters. The parameter calibration problem is considered as an optimal control problem that is investigated from a theoretical and numerical point of view. We prove the existence of optimal controls, derive the corresponding first-order optimality system and formulate a stochastic gradient descent algorithm to identify parameters for given data sets. To validate the approach we use real data sets from traffic and crowd dynamics to fit the parameters. The results are compared to forces corresponding to well-known interaction models such as the Lighthill-Whitham-Richards model for traffic and the social force model for crowd motion.'\naddress:\n- University of Mannheim\n- University of Mannheim\nauthor:\n- Simone G\u00f6ttlich\n- Claudia Totzeck\nbibliography:\n- 'biblio.bib'\n- 'referencesDissSK.bib'\n- 'BibBookChapter.bib'\ntitle: Optimal control for interacting particle systems driven by neural networks\n---\n\n[ optimal control; neural networks; parameter identification; data analysis]{}\\\n[ ]{} 34H05; 92B20; 82C32\n\nIntroduction\n============\n\nIn recent years, many models for interaction dynamics with various applications such as swarming, sheep and dogs, crowd motion, traffic and opinion dynamics have been proposed, see e.g.\u00a0[@AlbiPareschi; @Schafe2;" +"---\nabstract: 'We study the behaviour and properties of the solar wind using a 2.5D Alfv\u00e9n wave driven wind model. We first systematically compare the results of an Alfv\u00e9n wave (AW) driven wind model with a polytropic approach. Polytropic magnetohydrodynamic wind models are thermally driven, while Alfv\u00e9n waves act as additional acceleration and heating mechanisms in the Alfv\u00e9n wave driven model. We confirm that an AW-driven model is required to reproduce the observed bimodality of slow and fast solar winds. We are also able to reproduce the observed anti-correlation between the terminal wind velocity and the coronal source temperature with the AW-driven wind model. We also show that the wind properties along an eleven-year cycle differ significantly from one model to the other. The AW-driven model again shows the best agreement with observational data. Indeed, solar surface magnetic field topology plays an important role in the Alfv\u00e9n wave driven wind model, as it enters directly into the input energy sources via the Poynting flux. On the other hand, the polytropic wind model is driven by an assumed pressure gradient; thus it is relatively less sensitive to the surface magnetic field topology. Finally, we note that the net torque spinning
However, previous attempts at dynamic modeling of student procrastination suffer from major issues: they are unable to predict the next activity times, cannot deal with missing activity history, are not personalized, and disregard important course properties, such as assignment deadlines, that are essential in explaining the cramming behavior. To resolve these problems, we introduce a new personalized stimuli-sensitive Hawkes process model (SSHP), by jointly modeling all student-assignment pairs and utilizing their similarities, to predict students\u2019 next activity times even when there are no historical observations. Unlike regular point processes that assume a constant external triggering effect from the environment, we model three dynamic types of external stimuli, according to assignment availabilities, assignment deadlines, and each student\u2019s time management habits. Our experiments on two synthetic datasets and two real-world datasets show superior performance in future activity prediction compared with state-of-the-art models. Moreover, we show that our model achieves a flexible and" +"---\nabstract: 'In neural machine translation (NMT), monolingual data in the target language are usually exploited through a method called \u201cback-translation\u201d to synthesize additional training parallel data. The synthetic data have been shown helpful to train better NMT, especially for low-resource language pairs and domains. Nonetheless, large monolingual data in the target domains or languages are not always available to generate large synthetic parallel data. In this work, we propose a new method to generate large synthetic parallel data leveraging very small monolingual data in a specific domain. We fine-tune a pre-trained GPT-2 model on such small in-domain monolingual data and use the resulting model to generate a large amount of synthetic in-domain monolingual data. Then, we perform back-translation, or forward translation, to generate synthetic in-domain parallel data. Our preliminary experiments on three language pairs and five domains show the effectiveness of our method in generating fully synthetic but useful in-domain parallel data for improving NMT in all configurations. We also show promising results in extreme adaptation for personalized NMT.'\nauthor:\n- |\n Benjamin MarieAtsushi Fujita\\\n National Institute of Information and Communications Technology\\\n 3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0289, Japan\\\n [{bmarie, atsushi.fujita}@nict.go.jp]{}\nbibliography:\n- 'acl2020.bib'\ntitle: Synthesizing Monolingual Data for" +"---\nabstract: 'In this paper, we propose multi-input multi-output (MIMO) beamforming designs towards joint radar sensing and multi-user communications. We employ the Cram\u00e9r-Rao bound (CRB) as a performance metric of target estimation, under both point and extended target scenarios. We then propose minimizing the CRB of radar sensing while guaranteeing a pre-defined level of signal-to-interference-plus-noise ratio (SINR) for each communication user. For the single-user scenario, we derive a closed form for the optimal solution for both cases of point and extended targets. For the multi-user scenario, we show that both problems can be relaxed into semidefinite programming by using the semidefinite relaxation approach, and prove that the global optimum can always be obtained. 
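For context on the semidefinite relaxation step just mentioned, here is the classic SINR-constrained downlink beamforming problem in its SDR form. This is the standard power-minimization variant with random channels, not the paper's CRB objective; it requires cvxpy with an SDP-capable solver such as SCS:

```python
# Toy SDR for SINR-constrained downlink beamforming (power minimization).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(9)
N, K = 4, 2                                   # antennas, users
gamma, sigma2 = 2.0, 1.0                      # SINR target, noise power
H = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)

def h_W_h(hk, Wk):
    """Real quadratic form h^H W h (affine in the matrix variable W)."""
    return cp.real(hk.conj().T @ Wk @ hk)

W = [cp.Variable((N, N), hermitian=True) for _ in range(K)]
constraints = [Wk >> 0 for Wk in W]           # drop the rank-1 constraint
for k in range(K):
    hk = H[k][:, None]
    interf = sum(h_W_h(hk, W[j]) for j in range(K) if j != k)
    constraints.append(h_W_h(hk, W[k]) >= gamma * (interf + sigma2))

prob = cp.Problem(cp.Minimize(sum(cp.real(cp.trace(Wk)) for Wk in W)),
                  constraints)
prob.solve()
print("total transmit power:", prob.value)
print("ranks:", [np.linalg.matrix_rank(Wk.value, tol=1e-6) for Wk in W])
```

For this family of problems the relaxation is known to be tight, so the returned covariances are typically rank-1 and beamformers can be read off directly; proving an analogous global-optimality property for the CRB objective is the contribution described above.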
Finally, we demonstrate numerically that the globally optimal solutions are reachable via the proposed methods, which provide significant gains in target estimation performance over state-of-the-art benchmarks.'\nauthor:\n- |\n Fan Liu,\u00a0 Ya-Feng Liu,\u00a0 Ang Li,\u00a0\\\n Christos Masouros,\u00a0 and\u00a0Yonina C. Eldar,\u00a0[^1] [^2] [^3] [^4] [^5]\nbibliography:\n- 'IEEEabrv.bib'\n- 'JRC\\_REF.bib'\ntitle: 'Cram\u00e9r-Rao Bound Optimization for Joint Radar-Communication Design'\n---\n\nDual-functional radar-communication, joint beamforming, Cram\u00e9r-Rao bound, semidefinite relaxation, successive convex approximation.\n\nIntroduction\n============\n\nSensors and communication systems have shaped modern society in profound ways. 5G and" +"---\nabstract: 'We present a new method by which the total masses of galaxies including dark matter can be estimated from the kinematics of their globular cluster systems (GCSs). In the proposed method, we apply convolutional neural networks (CNNs) to the two-dimensional (2D) maps of line-of-sight-velocities ($V$) and velocity dispersions ($\\sigma$) of GCSs predicted from numerical simulations of disk and elliptical galaxies. In this method, we first train the CNN using either only a large number ($\\sim 200,000$) of the synthesized 2D maps of $\\sigma$ (\u201cone-channel\u201d) or those of both $\\sigma$ and $V$ (\u201ctwo-channel\u201d). Then we use the CNN to predict the total masses of galaxies (i.e., test the CNN) for a totally unknown dataset that is not used in training the CNN. The principal results show that the overall accuracies for one-channel and two-channel data are 97.6% and 97.8%, respectively, which suggests that the new method is promising. The mean absolute errors (MAEs) for one-channel and two-channel data are 0.288 and 0.275, respectively, and the root mean square errors (RMSEs) are 0.539 and 0.51 for one-channel and two-channel data, respectively. These smaller MAEs and RMSEs for two-channel data (i.e., better performance) suggest that the new method can properly" +"---\nabstract: 'In the classical theory, a famous by-product of the continued fraction expansion of quadratic irrational numbers $\\sqrt{D}$ is the solution to Pell\u2019s equation for $D$. It is well-known that, once an integer solution to Pell\u2019s equation exists, we can use it to generate all other solutions $(u_n,v_n)_{n\\in{\\mathbb{Z}}}$. Our object of interest is the polynomial version of Pell\u2019s equation, where the integers are replaced by polynomials with complex coefficients. We then investigate the factors of $v_n(t)$. In particular, we show that over the complex polynomials, there are only finitely many values of $n$ for which $v_n(t)$ has a repeated root. Restricting our analysis to ${\\mathbb{Q}}[t]$, we give an upper bound on the number of \u201cnew\u201d factors of $v_n(t)$ of degree at most $N$. 
Furthermore, we show that all \u201cnew\u201d linear rational factors of $v_n(t)$ can be found when $n\\leq 3$, and all \u201cnew\u201d quadratic rational factors when $n\\leq 6$.'\nauthor:\n- |\n Nikoleta Kalaydzhieva[^1]\\\n *University College London*\nbibliography:\n- 'Arxiv-reproots.bib'\ntitle: 'Properties of solutions to Pell\u2019s equation over the polynomial ring[^2]'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nPell\u2019s equation is defined to be $$\\begin{aligned}\n\\label{nPell}\n x^2-Dy^2=1,\\end{aligned}$$ and classically solved in positive integers $x=u,\\ y=v$, for a given non-zero positive integer" +"---\nabstract: 'An analytic method is proposed to compute the surface energy and elementary excitations of the XXZ spin chain with generic non-diagonal boundary fields. For the gapped case, in some boundary parameter regimes the contributions of the two boundary fields to the surface energy are non-additive. Such a correlation effect between the two boundaries also depends on the parity of the site number $N$ even in the thermodynamic limit $N\\to\\infty$. For the gapless case, contributions of the two boundary fields to the surface energy are additive due to the absence of long-range correlation in the bulk. Although the $U(1)$ symmetry of the system is broken, exact spinon-like excitations, which obviously do not carry spin-$\\frac12$, are observed. The present method provides a universal procedure to deal with quantum integrable systems either with or without $U(1)$ symmetry.'\nauthor:\n- Yi Qiao\n- Junpeng Cao\n- 'Wen-Li Yang'\n- Kangjie Shi\n- Yupeng Wang\ntitle: 'Exact surface energy and helical spinons in the XXZ spin chain with arbitrary non-diagonal boundary fields'\n---\n\n[^1]\n\n[^2]\n\nQuantum integrable systems with generic non-diagonal boundary fields have attracted a lot of attention owing to their important applications in high energy physics [@Ber05], open string/gauge theory [@Sch06; @Bei12;" +"---\nabstract: 'Data scientists face a steep learning curve in understanding a new domain for which they want to build machine learning (ML) models. While input from domain experts could offer valuable help, such input is often limited, expensive, and generally not in a form readily consumable by a model development pipeline. In this paper, we propose Ziva, a framework to guide domain experts in sharing essential domain knowledge with data scientists for building NLP models. With Ziva, experts are able to distill and share their domain knowledge using domain concept extractors and five types of label justification over a representative data sample. The design of Ziva is informed by preliminary interviews with data scientists, in order to understand current practices in the domain knowledge acquisition process for ML development projects. To assess our design, we run a mixed-method case study to evaluate how Ziva can facilitate interaction between domain experts and data scientists. Our results highlight that (1) domain experts are able to use Ziva to provide rich domain knowledge, while maintaining low mental load and stress levels; and (2) data scientists find Ziva\u2019s output helpful for learning essential information about the domain, offering scalability of information, and lowering the burden" +"---\nabstract: 'In blind source separation of speech signals, the inherent imbalance in the source spectrum poses a challenge for methods that rely on single-source dominance for the estimation of the mixing matrix. 
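The weighted Lehmer mean used by the approach described next is a simple two-parameter family that interpolates between min-like and max-like behavior, which is what makes it useful for adapting to source imbalance. A small sketch (illustrative only, not the paper's exact contrast function):

```python
# Weighted Lehmer mean: L_p(x; w) = sum(w * x**p) / sum(w * x**(p-1)).
# p -> -inf approaches min(x), p = 0 is the (weighted) harmonic mean,
# p = 1 the arithmetic mean, and p -> +inf approaches max(x).
import numpy as np

def lehmer_mean(x, p, w=None):
    x = np.asarray(x, dtype=float)
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    return np.sum(w * x**p) / np.sum(w * x**(p - 1))

x = np.array([0.2, 1.0, 3.0])
for p in (-10.0, 0.0, 1.0, 10.0):
    print(f"p = {p:>5}: Lehmer mean = {lehmer_mean(x, p):.4f}")
```

Making the weights learnable, as described next, lets the contrast function adapt per direction rather than committing to one fixed point on this family.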
We propose an algorithm based on the directional sparse filtering (DSF) framework that utilizes the Lehmer mean with learnable weights to adaptively account for source imbalance. Performance evaluations in multiple real acoustic environments show improvements in source separation compared to the baseline methods.'\naddress: |\n School of Electrical & Electronic Engineering\\\n Nanyang Technological University, Singapore\\\n Email: karn001@e.ntu.edu.sg, {nguyenhta, chinghui.ooi, andykhong}@ntu.edu.sg\nbibliography:\n- 'IEEEabrv.bib'\n- 'refs.bib'\ntitle: |\n DIRECTIONAL SPARSE FILTERING USING WEIGHTED LEHMER MEAN\\\n FOR BLIND SEPARATION OF UNBALANCED SPEECH MIXTURES\n---\n\nBlind source separation, sparse filtering, directional clustering, Lehmer mean, microphone array\n\nIntroduction {#sec:intro}\n============\n\nUnsupervised blind source separation (BSS) is the process of extracting source signals from their mixture with little to no prior information about the sources and without prior training using labelled data. In this paper, we focus on the problem of estimating the complex-valued mixing matrix from a multichannel observed mixture, particularly that of speech signals. We assume that the data, at each frequency bin, follow the noiseless linear mixing model $${\\mathbf{x}}[k]" +"---\nabstract: 'Privacy is an important concern when building statistical models on data containing personal information. Differential privacy offers a strong definition of privacy and can be used to solve several privacy concerns [@dwork2014algorithmic]. Multiple solutions have been proposed for the differentially-private transformation of datasets containing sensitive information. However, such transformation algorithms offer poor utility in Natural Language Processing (NLP) tasks due to noise added in the process. In this paper, we address this issue by providing a utility-preserving differentially private text transformation algorithm using auto-encoders. Our algorithm transforms text to offer robustness against attacks and produces transformations with high semantic quality that perform well on downstream NLP tasks. We prove the theoretical privacy guarantee of our algorithm and assess its privacy leakage under Membership Inference Attacks (MIA) [@shokri2017membership] on models trained with transformed data. Our results show that the proposed model performs better against MIA attacks while offering little to no degradation in the utility of the underlying transformation process compared to existing baselines.'\nauthor:\n- |\n Satyapriya Krishna\\\n Amazon Alexa\\\n `satyapk@amazon.com`\\\n Rahul Gupta\\\n Amazon Alexa\\\n `gupra@amazon.com`\\\n Christophe Dupuy\\\n Amazon Alexa\\\n `dupuychr@amazon.com`\\\nbibliography:\n- 'anthology.bib'\n- 'eacl2021.bib'\ntitle: 'ADePT: Auto-encoder based Differentially Private Text Transformation'\n---\n\nIntroduction\n============\n\nDifferentially" +"---\nabstract: 'Aerosolized droplets play a central role in the transmission of various infectious diseases, including Legionnaires\u2019 disease, gastroenteritis-causing norovirus, and most recently COVID-19. Respiratory droplets are known to be the most prominent source of transmission for COVID-19; however, alternative routes may exist given the discovery of small numbers of viable viruses in urine and stool samples.
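The weighted Lehmer mean underlying the directional-sparse-filtering record above is easy to state directly; in the paper the weights are learnable, whereas here they are fixed inputs for illustration.

```python
# Weighted Lehmer mean: L_p(x; w) = sum(w * x**p) / sum(w * x**(p-1)).
# Large p emphasizes dominant entries; p = 1 recovers the arithmetic mean.
import numpy as np

def weighted_lehmer_mean(x, w, p):
    x, w = np.asarray(x, float), np.asarray(w, float)
    return np.sum(w * x**p) / np.sum(w * x**(p - 1))

x = np.array([0.1, 0.5, 2.0])
w = np.ones_like(x)           # uniform weights (learnable in the paper)
print(weighted_lehmer_mean(x, w, 1.0))   # arithmetic mean
print(weighted_lehmer_mean(x, w, 5.0))   # close to max(x)
```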
Flushing biomatter can lead to the aerosolization of microorganisms, thus, there is a likelihood that bioaerosols generated in public restrooms may pose a concern for the transmission of COVID-19, especially since these areas are relatively confined, experience heavy foot traffic, and may suffer from inadequate ventilation. To quantify the extent of aerosolization, we measure the size and number of droplets generated by flushing toilets and urinals in a public restroom. The results indicate that the particular designs tested in the study generate a large number of droplets in the size range $0.3 \\mu m$ to $3 \\mu m$, which can reach heights of at least $1.52m$. Covering the toilet reduced aerosol levels but did not eliminate them completely, suggesting that aerosolized droplets escaped through small gaps between the cover and the seat. In addition to consistent increases in aerosol levels immediately after flushing," +"---\nabstract: 'As a fundamental problem in algorithmic trading, order execution aims at fulfilling a specific trading order, either liquidation or acquirement, for a given instrument. Towards effective execution strategy, recent years have witnessed the shift from the analytical view with model-based market assumptions to model-free perspective, i.e., reinforcement learning, due to its nature of sequential decision optimization. However, the noisy and yet imperfect market information that can be leveraged by the policy has made it quite challenging to build up sample efficient reinforcement learning methods to achieve effective order execution. In this paper, we propose a novel universal trading policy optimization framework to bridge the gap between the noisy yet imperfect market states and the optimal action sequences for order execution. Particularly, this framework leverages a policy distillation method that can better guide the learning of the common policy towards practically optimal execution by an oracle teacher with perfect information to approximate the optimal trading strategy. The extensive experiments have shown significant improvements of our method over various strong baselines, with reasonable trading actions.'\nauthor:\n- |\n Yuchen Fang, ^1^[^1] Kan Ren, ^2^ Weiqing Liu, ^2^ Dong Zhou, ^2^\\\n Weinan Zhang, ^1^ Jiang Bian, ^2^ Yong Yu, ^1^ Tie-Yan" +"---\nabstract: 'The rapid outbreak of COVID-19 has caused humanity to come to a stand-still and brought with it a plethora of other problems. COVID-19 is the first pandemic in history when humanity is the most technologically advanced and relies heavily on social media platforms for connectivity and other benefits. Unfortunately, fake news and misinformation regarding this virus is also available to people and causing some massive problems. So, fighting this infodemic has become a significant challenge. We present our solution for the \u201cConstraint@AAAI2021 - COVID19 Fake News Detection in English\u201d challenge in this work. After extensive experimentation with numerous architectures and techniques, we use eight different transformer-based pre-trained models with additional layers to construct a stacking ensemble classifier and fine-tuned them for our purpose. We achieved 0.979906542 accuracy, 0.979913119 precision, 0.979906542 recall, and 0.979907901 f1-score on the test dataset of the competition.'\nauthor:\n- 'S.M. Sadiq-Ur-Rahman Shifath'\n- Mohammad Faiyaz Khan\n- 'Md. 
Saiful Islam'\nbibliography:\n- 'main.bib'\ntitle: 'A transformer based approach for fighting COVID-19 fake news'\n---\n\nIntroduction\n============\n\nThe Coronavirus disease 2019 (COVID-19) is an infectious disease caused by SARS coronavirus 2. It has impacted almost every country and changed people worldwide\u2019s social, economic, and psychological" +"---\nabstract: 'Image captioning has focused on generalizing to images drawn from the same distribution as the training set, and not to the more challenging problem of generalizing to different distributions of images. Recently, [nikolaus-etal-2019-compositional]{} introduced a dataset to assess compositional generalization in image captioning, where models are evaluated on their ability to describe images with unseen adjective\u2013noun and noun\u2013verb compositions. In this work, we investigate different methods to improve compositional generalization by planning the syntactic structure of a caption. Our experiments show that jointly modeling tokens and syntactic tags enhances generalization in both RNN- and Transformer-based models, while also improving performance on standard metrics.'\nauthor:\n- Emanuele Bugliarello\n- |\n Desmond Elliott\\\n Department of Computer Science\\\n University of Copenhagen\\\n `{emanuele,de}@di.ku.dk`\nbibliography:\n- 'eacl2021.bib'\ntitle: The Role of Syntactic Planning in Compositional Image Captioning\n---\n\nIntroduction\n============\n\nImage captioning is a core task in multimodal NLP, where the aim is to automatically describe the content of an image in natural language. To succeed in this task, a model first needs to recognize and understand the properties of the image. Then, it needs to generate well-formed sentences, requiring both a syntactic and a semantic knowledge of the language [@hossain2019comprehensive]. Deep learning" +"---\nabstract: |\n Training machine learning models requires feeding input data for models to ingest. Input pipelines for machine learning jobs are often challenging to implement efficiently as they require reading large volumes of data, applying complex transformations, and transferring data to hardware accelerators while overlapping computation and communication to achieve optimal performance. We present [`tf.data`]{}, a framework for building and executing efficient input pipelines for machine learning jobs. The [`tf.data`]{}API provides operators which can be parameterized with user-defined computation, composed, and reused across different machine learning domains. These abstractions allow users to focus on the application logic of data processing, while [`tf.data`]{}\u2019s runtime ensures that pipelines run efficiently.\n\n We demonstrate that input pipeline performance is critical to the end-to-end training time of state-of-the-art machine learning models. [`tf.data`]{}delivers the high performance required, while avoiding the need for manual tuning of performance knobs. We show that [`tf.data`]{}features, such as parallelism, caching, static optimizations, and non-deterministic execution are essential for high performance. Finally, we characterize machine learning input pipelines for millions of jobs that ran in Google\u2019s fleet, showing that input data processing is highly diverse and consumes a significant fraction of job resources. 
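A minimal pipeline in the style of the `tf.data` record above, composing a user-defined transformation with parallelism, caching, and prefetching so data processing overlaps model computation; the toy data and the preprocessing function are assumptions.

```python
import tensorflow as tf

def preprocess(x):
    return tf.cast(x, tf.float32) / 255.0   # hypothetical per-element transform

dataset = (
    tf.data.Dataset.from_tensor_slices(tf.zeros([1024, 28, 28], tf.uint8))
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)  # parallel user code
    .cache()                                               # reuse across epochs
    .shuffle(buffer_size=256)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)   # overlap producer and consumer
)

for batch in dataset.take(1):
    print(batch.shape)   # (32, 28, 28)
```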
Our analysis motivates future research directions, such as" +"---\nabstract: 'We propose [MultiRocket]{}, a fast time series classification (TSC) algorithm that achieves state-of-the-art accuracy with a tiny fraction of the time and without the complex ensembling structure of many state-of-the-art methods. [MultiRocket]{} improves on MiniRocket, one of the fastest TSC algorithms to date, by adding multiple pooling operators and transformations to improve the diversity of the features generated. In addition to processing the raw input series, MultiRocket also applies first order differences to transform the original series. Convolutions are applied to both representations, and four pooling operators are applied to the convolution outputs. When benchmarked using the University of California Riverside TSC benchmark datasets, [MultiRocket]{} is significantly more accurate than MiniRocket, and competitive with the best ranked current method in terms of accuracy, HIVE-COTE 2.0, while being orders of magnitude faster.'\nauthor:\n- Chang Wei Tan\n- Angus Dempster\n- Christoph Bergmeir\n- 'Geoffrey I. Webb'\nbibliography:\n- 'biblio.bib'\ndate: 'Received: date / Accepted: date'\ntitle: '[MultiRocket]{}: Multiple pooling operators and transformations for fast and effective time series classification [^1] '\n---\n\nIntroduction" +"---\nabstract: |\n In this paper we prove the existence of isoperimetric regions of any volume in Riemannian manifolds with Ricci bounded below assuming Gromov\u2013Hausdorff asymptoticity to the suitable simply connected model of constant sectional curvature.\n\n The previous result is a consequence of a general structure theorem for perimeter-minimizing sequences of sets of fixed volume on noncollapsed Riemannian manifolds with a lower bound on the Ricci curvature. We show that, without assuming any further hypotheses on the asymptotic geometry, all the mass and the perimeter lost at infinity, if any, are recovered by at most countably many isoperimetric regions sitting in some (possibly nonsmooth) Gromov\u2013Hausdorff limits at infinity.\n\n The Gromov\u2013Hausdorff asymptotic analysis allows us to recover and extend different previous existence theorems.\n\n While studying the isoperimetric problem in the smooth setting, the nonsmooth geometry naturally emerges, and thus our treatment combines techniques from both theories.\nauthor:\n- 'Gioacchino Antonelli[^1]'\n- 'Mattia Fogagnolo[^2]'\n- 'Marco Pozzetta[^3]'\nbibliography:\n- 'Bibliography.bib'\ntitle: 'The isoperimetric problem on Riemannian manifolds via Gromov\u2013Hausdorff asymptotic analysis'\n---\n\n**MSC (2020).** Primary: 49J45, 26B30, 53A35. Secondary: 53C23, 49J52.\\\n**Keywords.** Gromov\u2013Hausdorff convergence, isoperimetric problem, Ricci curvature, RCD spaces, finite perimeter sets.\n\nIntroduction\n============\n\nThe classical isoperimetric problem can be" +"---\nabstract: 'Blockchain interoperability is a prominent research field which aims to build bridges between otherwise isolated blockchains. With advances in cryptography, novel protocols are published by academia and applied in applications and products across the industry. In theory, these innovative protocols provide strong privacy and security guarantees by including formal proofs. However, purely theoretical work often lacks the perspective of real-world applications.
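A toy sketch of the feature-extraction idea in the MultiRocket record above: convolve both the raw series and its first-order difference with a kernel, then summarize each convolution output with pooling statistics such as the proportion of positive values (PPV). The random kernel and the particular statistics shown are stand-ins for the method's fixed kernel set and its four pooling operators.

```python
import numpy as np

def ppv(z):
    return np.mean(z > 0)   # proportion of positive values

rng = np.random.default_rng(0)
series = rng.standard_normal(512)
kernel, bias = rng.standard_normal(9), rng.standard_normal()

features = []
for rep in (series, np.diff(series)):        # raw series and first difference
    z = np.convolve(rep, kernel, mode="valid") + bias
    features += [ppv(z), z.mean(), z.max()]  # several pooling operators
print(features)
```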
In this work, we describe a number of largely unexplored problems that developers encounter when building cross-chain products.'\nauthor:\n- Thomas Eizinger\n- Philipp Hoenisch\n- Lucas Soriano del Pino\nbibliography:\n- 'samplepaper.bib'\ntitle: 'Open problems in cross-chain protocols'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe domain of blockchain technology has been a prominent research field for industry and academia ever since Bitcoin was introduced in 2008 [@nakamoto2008]. Its central idea is simple: to provide a trustless and censorship-resistant way of transferring asset ownership between parties. Besides Bitcoin, a blockchain ecosystem has evolved over the years with hundreds of different implementations. Most blockchains provide their own coin which is used to pay for transaction fees, smart contract executions or as *digital cash*." +"---\nabstract: 'This paper proposes a novel monitoring methodology for car-following control of automated vehicles that uses real-time measurements of spacing and velocity obtained through vehicle sensors. This study focuses on monitoring the time gap, a key parameter that dictates the desired following spacing of the controlled vehicle. The goal is to monitor deviations in the actual time gap from a desired setting and detect when it drifts beyond a control limit. A random coefficient model is developed to systematically capture the stochastic distribution of the time gap and to derive a closed-form Bayesian updating scheme for real-time inference. A control chart is then adopted to systematically set the control limits and inform when the time gap setting should be changed. Simulation experiments are performed to demonstrate the effectiveness of the proposed method for monitoring the time gap and alerting when the parameter setting needs to be changed.'\nauthor:\n- |\n Wissam Kontar\\\n Department of Civil and Environmental Engineering\\\n University of Wisconsin-Madison\\\n Madison, WI 53706\\\n `kontar@wisc.edu`\\\n Soyoung Ahn [^1]\\\n Department of Civil and Environmental Engineering\\\n University of Wisconsin-Madison\\\n Madison, WI 53706\\\n `sue.ahn@wisc.edu`\\\nbibliography:\n- 'references.bib'\ntitle: 'Real-time Monitoring of Autonomous Vehicle\u2019s Time Gap Variations: A Bayesian Framework'\n---\n\nAutomated vehicles (AV) have" +"---\nauthor:\n- 'Akshita Gupta\\*, Sanath Narayan\\*, Salman Khan, Fahad Shahbaz Khan, Ling Shao, Joost van de Weijer'\nbibliography:\n- 'main.bib'\ntitle: 'Generative Multi-Label Zero-Shot Learning'\n---\n\nMulti-label classification is a challenging problem where the task is to recognize all labels in an image. Typical examples of multi-label classification include the MS COCO\u00a0[@coco] and NUS-WIDE\u00a0[@nuswide] datasets, where an image may contain several different categories (labels). Most recent multi-label classification approaches address the problem by utilizing attention mechanisms\u00a0[@wang2017multi; @yeattention; @you2020cross], recurrent neural networks\u00a0[@wang2016cnn; @yazici2020orderless; @nam2017maximizing], graph CNNs\u00a0[@kipf2016semi; @chen2019multi] and label correlations\u00a0[@weston2011wsabie; @durand2019learning].
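A simplified stand-in for the time-gap monitor in the automated-vehicle record above: plain Shewhart control limits replace the paper's Bayesian random-coefficient updating, purely for illustration, and the numbers are made up.

```python
import numpy as np

def control_limits(baseline, k=3.0):
    mu, sigma = np.mean(baseline), np.std(baseline, ddof=1)
    return mu - k * sigma, mu + k * sigma

rng = np.random.default_rng(1)
baseline = 1.5 + 0.05 * rng.standard_normal(200)   # in-control time gaps (s)
lo, hi = control_limits(baseline)

stream = np.concatenate([baseline[:50], 1.5 + 0.3 * np.arange(10)])  # drift
for t, g in enumerate(stream):
    if not lo <= g <= hi:
        print(f"alarm at sample {t}: time gap {g:.2f}s outside [{lo:.2f}, {hi:.2f}]")
        break
```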
However, these approaches do not tackle the problem of multi-label zero-shot classification, where the task is to classify images into multiple new \u201cunseen\u201d categories at test time, without being given any corresponding visual example during training. Unlike zero-shot learning (ZSL), in generalized zero-shot learning (GZSL) the test samples can belong to either seen or unseen classes. Here, we tackle the challenging problem of large-scale multi-label ZSL and GZSL.\n\nExisting multi-label (G)ZSL approaches address the problem by utilizing global image representations\u00a0[@mensink2014costa; @zhang2016fast], structured knowledge graphs\u00a0[@lee2018multi] and attention-based mechanisms\u00a0[@huynh2020shared]. In contrast to" +"---\nauthor:\n- Ayan Paul\n- Jayanta Kumar Bhattacharjee\n- Akshay Pal\n- Sagar Chakraborty\nbibliography:\n- 'bibliography.bib'\ntitle: 'Emergence of universality in the transmission dynamics of COVID-19'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe spread of SARS-CoV-2 has left significant instabilities in the socioeconomic fabric of society. While the spreading dynamics of the disease is not novel\u00a0[@doi:10.1056/NEJMoa2001316], the instabilities it has caused have made various parts of society, and notably various governments, respond to containing its spread in very different manners\u00a0[@Dehningeabb9789; @10.1093/jtm/taaa020; @SOHRABI202071; @zhang2020covid; @Fisher2020]. The determination of the optimal strategy has been quite a challenge and highly dependent on the socio-economic condition of the country or region\u00a0[@Fraser6146; @Flaxman2020; @2020arXiv201215230P; @Kuhn2020.12.18.20248509]. A lot of effort has been spent trying to bring some predictability to the spread of the pandemic and even a few weeks of foresight can not only save an economy from being jettisoned but also save a considerable number of lives that need not be lost. Moreover, the experience of the past few months indicates that controlling the resurgence of the disease is a formidable task.\n\nVarious kinds of models have been used to describe the spread of COVID-19 with varying degrees of" +"---\nabstract: 'This paper describes the joint effort of BUT and Telef\u00f3nica Research on the development of Automatic Speech Recognition systems for the Albayzin 2020 Challenge. We compare approaches based on either hybrid or end-to-end models. In hybrid modelling, we explore the impact of the SpecAugment[@maliddi:is:2016:specaug; @park:IS:2019:specaug] layer on performance. For end-to-end modelling, we used a convolutional neural network with gated linear units (GLUs). The performance of such a model is also evaluated with an additional n-gram language model to improve word error rates. We further inspect source separation methods to extract speech from noisy environments (i.e. TV shows). More precisely, we assess the effect of using a neural-based music separator named Demucs[@defossez2019demucs]. A fusion of our best systems achieved 23.33%\u00a0WER in the official Albayzin 2020 evaluations.
Aside from techniques used in our final submitted systems, we also describe our efforts in retrieving high-quality transcripts for training.'\naddress: |\n [$^1$]{}Brno University of Technology, Speech@FIT, IT4I CoE\\\n [$^2$]{}Telef\u00f3nica Research\\\n [$^{3}$]{}Universitat Pompeu Fabra\\\n [$^4$]{}Universitat de Barcelona\nbibliography:\n- 'mybib.bib'\ntitle: 'BCN2BRNO: ASR System Fusion for Albayzin 2020 Speech to Text Challenge'\n---\n\n**Index Terms**: fusion, end-to-end model, hybrid model, semi-supervised, automatic speech recognition, convolutional neural network.\n\nIntroduction\n============\n\nThe Albayzin 2020 challenge is a continuation of the Albayzin" +"---\nbibliography:\n- 'ref.bib'\n---\n\nIntroduction {#sec:introduction}\n============\n\nIn today\u2019s data-driven world, protecting the privacy of individuals\u2019 information is of the utmost importance to data curators, both as an ethical consideration and as a legal requirement, e.g. Article 29 of the European Union\u2019s General Data Protection Regulation describes privacy risks as singling out, linkability and inference.\n\nSequential data, such as DNA sequences, textual data and mobility traces, is being increasingly used in a variety of real-life applications, spanning from genome and language modeling to location-based recommendation systems. However, using such data poses considerable threats to individual privacy. It might be used by a malicious adversary to discover potential sensitive information about a data owner such as their habits, religion or relationships.\n\nData anonymisation is a popular means of privacy preservation in datasets. One such example is the K-anonymity framework [@knon], [@dp-any], which anonymises data by generalising quasi identifiers, ensuring that an individual\u2019s data is indistinguishable from at least ($k-1$) others\u2019. However, even the K-anonymity approach still poses privacy concerns, since it is deterministic and susceptible to privacy attacks, such as linkage attacks. It is therefore urgent to respond to the failure of existing anonymisation techniques by developing new schemes with" +"---\nabstract: 'We explore the consequences of a time-dependent inflaton Equation-of-State (EoS) parameter in the context of post-inflationary perturbative Boltzmann reheating. In particular, we numerically solve the perturbative coupled system of Boltzmann equations involving the inflaton energy density, the radiation energy density and the related entropy density and temperature of the produced particle thermal bath. We exploit reasonable Ans\u00e4tze for the EoS and discuss the robustness of the Boltzmann system. We also comment on the possible microscopic origin related to a time-dependent inflaton potential, discussing the consequences for a preheating stage and the related (primordial) gravitational waves.'\nauthor:\n- Alessandro Di Marco\n- Gianfranco Pradisi\nbibliography:\n- 'apssamp.bib'\ntitle: Variable Inflaton Equation of State and Reheating\n---\n\nIntroduction\n============\n\nThe slow-roll inflationary scenario [@1; @2; @3; @4; @5; @6] is based on the introduction of a neutral, homogeneous and minimally coupled scalar field $\\phi$, the inflaton, usually equipped with an effective potential $V(\\phi)$ characterized by an almost flat region and a fundamental vacuum state.
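A bare-bones integration of the kind of perturbative reheating system named in the record above, with a constant EoS parameter w as a simplifying assumption (the paper's point is precisely a time-dependent w); units are arbitrary, with the reduced Planck mass set to 1.

```python
# Standard perturbative reheating equations (sketch, w held constant):
#   drho_phi/dt = -3H(1+w) rho_phi - Gamma rho_phi
#   drho_R/dt   = -4H rho_R + Gamma rho_phi,   H = sqrt((rho_phi + rho_R)/3)
import numpy as np
from scipy.integrate import solve_ivp

w, Gamma = 0.0, 1e-2

def rhs(t, y):
    rho_phi, rho_R = y
    H = np.sqrt(max(rho_phi + rho_R, 0.0) / 3.0)
    return [-3.0 * H * (1.0 + w) * rho_phi - Gamma * rho_phi,
            -4.0 * H * rho_R + Gamma * rho_phi]

sol = solve_ivp(rhs, (0.0, 2000.0), [1.0, 0.0], rtol=1e-8, atol=1e-12)
print(sol.y[:, -1])   # radiation comes to dominate as the inflaton decays
```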
In the early phase of inflation the scalar field, displaced from the minima of its potential, slowly moves through the almost flat region of $V(\\phi)$, covering a distance $\\Delta\\phi$ [@7] and mimicking a false" +"---\nabstract: 'Traditional manual age estimation is labour-intensive and relies on many kinds of X-ray images. Some current studies have shown that lateral cephalometric (LC) images can be used to estimate age. However, these methods are based on manually measuring some image features and making age estimates based on experience or scoring. Therefore, these methods are time-consuming and labor-intensive, and their results are affected by subjective opinions. In this work, we propose a saliency map-enhanced age estimation method, which can automatically perform age estimation based on LC images. Meanwhile it can also show the importance of each region in the image for age estimation, which undoubtedly increases the method\u2019s interpretability. Our method was tested on 3014 LC images from subjects aged 4 to 40 years. The MAE of the experimental result is 1.250, which is lower than that of the state-of-the-art benchmark, because our method performs significantly better in the age groups with less data. In addition, our model is trained on each area with a high contribution to age estimation in LC images, so the effect of these different areas on the age estimation task was verified. Consequently, we conclude that the proposed saliency-map-enhanced chronological age estimation method" +"---\nabstract: 'Shortening acquisition time and reducing motion artifacts are two of the most critical issues in MRI. As a promising solution, high-quality MRI image restoration provides a new approach to achieve higher resolution without requiring additional acquisition time, modification of the pulse sequences, or repetition of the acquisition. Recently, with the rise of deep learning, convolutional neural networks (CNNs) have been proposed to generate super-resolution images and reduce motion artifacts for MRI applications. Recent studies suggest using a perceptual feature-space loss and a k-space loss to capture the perceptual and high-frequency information of images, respectively. However, the quality of reconstructed super-resolution and motion-artifact-reduced MR images is limited because the most important details of the informative areas in MR images, the edges and the structure, cannot be generated well. Moreover, many super-resolution approaches are trained using low-resolution images generated by bicubic or blur-downscale degradation, which cannot represent the real process of MRI measurement. Such inconsistencies lead to performance degradation in the reconstruction of super-resolution MR images as well. This study reveals that using the L1 loss of SSIM and gradient map edge quality loss could force the deep learning model to" +"---\nabstract: 'This work develops problem statements related to encoders and autoencoders with the goal of elucidating variational formulations and establishing clear connections to information-theoretic concepts. Specifically, four problems with varying levels of input are considered: a) the data, likelihood and prior distributions are given; b) the data and likelihood are given; c) the data and prior are given; d) the data and the dimensionality of the parameters are specified. The first two problems seek encoders (or the posterior) and the latter two seek autoencoders (i.e. the posterior and the likelihood).
A variational Bayesian setting is pursued, and detailed derivations are provided for the resulting optimization problem. Following this, a linear Gaussian setting is adopted, and closed form solutions are derived. Numerical experiments are also performed to verify expected behavior and assess convergence properties. Explicit connections are made to rate-distortion theory, information bottleneck theory, and the related concept of sufficiency of statistics is also explored. One of the motivations of this work is to present the theory and learning dynamics associated with variational inference and autoencoders, and to expose information theoretic concepts from a computational science perspective.'\nauthor:\n- |\n Karthik Duraisamy\\\n [*Department of Aerospace Engineering* ]{}\\\n [*University of" +"---\nabstract: 'The topological transitions that occur to the grain boundary network during grain growth in a material with uniform grain boundary energies are believed to be known. The same is not true for more realistic materials, since more general grain boundary energies in principle allow many more viable grain boundary configurations. A simulation of grain growth in such a material therefore requires a procedure to enumerate all possible topological transitions and select the most energetically favorable one. Such a procedure is developed and implemented here for a microstructure represented by a volumetric finite element mesh. As a specific example, all possible transitions for a typical configuration with five grains around a junction point are enumerated, and some exceptional transitions are found to be energetically similar to the conventional ones even for a uniform boundary energy. A general discrete formulation to calculate grain boundary velocities is used to simulate grain growth for an example microstructure. The method is implemented as a C++ library based on SCOREC, an open source massively parallelizable library for finite element simulations with adaptive meshing.'\nauthor:\n- Erdem Eren\n- 'Jeremy K. Mason'\ntitle: Topological transitions during grain growth on a finite element mesh\n---\n\n=1" +"---\nabstract: 'Detecting the presence of persons and estimating their quantity in an indoor environment has grown in importance recently. For example, the information if a room is unoccupied can be used for automatically switching off the light, air conditioning, and ventilation, thereby saving significant amounts of energy in public buildings. Most existing solutions rely on dedicated hardware installations, which involve presence sensors, video cameras, and carbon dioxide sensors. Unfortunately, such approaches are costly, are subject to privacy concerns, have high computational requirements, and lack ubiquitousness. The work presented in this article addresses these limitations by proposing a low-cost occupancy detection system. Our approach builds upon detecting variations in Bluetooth Low Energy (BLE) signals related to the presence of humans. The effectiveness of this approach is evaluated by performing comprehensive tests on five different datasets. We apply several pattern recognition models and compare our methodology with systems building upon IEEE 802.11 (WiFi). On average, in multifarious environments, we can correctly classify the occupancy with an accuracy of 97.97%. When estimating the number of people in a room, on average, the estimated number of subjects differs from the actual one by 0.32 persons. 
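An illustrative stand-in for the BLE occupancy record above: classify room occupancy from summary statistics of received-signal-strength (RSSI) variations. The synthetic data generator and the particular model are assumptions; the paper evaluates several pattern-recognition models on real measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 400
occupied = rng.integers(0, 2, n)                 # ground-truth labels
# People moving in a room perturb BLE signals, raising RSSI variance
# (the effect the method exploits); the coefficients here are invented.
rssi_std = 1.0 + 2.0 * occupied + 0.5 * rng.standard_normal(n)
rssi_mean = -60.0 - 3.0 * occupied + rng.standard_normal(n)
X = np.column_stack([rssi_mean, rssi_std])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, occupied, cv=5).mean())   # accuracy estimate
```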
We conclude that our system\u2019s performance is comparable" +"---\nabstract: 'We consider the finite set of isogeny classes of $g$\u2013dimensional abelian varieties defined over the finite field ${\\mathbb{F}_{q^{}}}$ with endomorphism algebra being a field. We prove that the class within this set whose varieties have maximal number of rational points is unique, for any prime even power $q$ big enough and verifying mild conditions. We describe its Weil polynomial and we prove that the class is ordinary and cyclic outside the primes dividing an integer that only depends on $g$. In dimension $3$, we prove that the class is ordinary and cyclic and give explicitly its Weil polynomial, for any prime even power $q$.'\naddress:\n- 'Laboratoire d\u2019informatique de l\u2019\u00c9cole polytechnique (LIX), CNRS, \u00c9cole polytechnique, Institut Polytechnique de Paris, 91120 Palaiseau, France'\n- 'Universit[\u00e9]{} Polytechnique Hauts-de-France, Laboratoire de Math[\u00e9]{}matiques pour l\u2019Ing[\u00e9]{}nieur (LMI), FR CNRS 2956, F-59313 Valenciennes, France and Universidad Nacional de Asunci[\u00f3]{}n, Facultad de Ingenier[\u00ed]{}a, Paraguay'\nauthor:\n- Elena Berardini\n- 'Alejandro\u00a0J.\u00a0Giangreco-Maidana'\nbibliography:\n- 'BG-maximal\\_ab\\_var.bib'\ntitle: Weil polynomials of abelian varieties over finite fields with many rational points\n---\n\nIntroduction\n============\n\nIn the present paper we study abelian varieties defined over finite fields with groups of rational points of large cardinality, and their cyclicity. Arithmetic" +"---\nabstract: 'We illustrate the observability of the end stages of the earliest (Population III) stars at high redshifts $z \\gtrsim 10$, using the recently observed transient, GN-z11-flash as an example. We find that the observed spectrum of this transient is consistent with its originating from a shock-breakout in a Population III supernova occurring in the GN-z11 galaxy at $z \\sim 11$. The energetics of the explosion indicate a progenitor star of mass $\\sim 300 M_{\\odot}$ in that galaxy, with of order unity such events expected over an observing timescale of a few years. We forecast the expected number of such transients from $z > 10$ galaxies as a function of their host stellar mass and star formation rate. Our findings are important in the context of future searches to detect and identify the signatures of galaxies at Cosmic Dawn.'\n---\n\n[**Signatures of Population III supernovae at Cosmic Dawn: the case of GN-z11-flash**]{}\n\nHamsa Padmanabhan$^{1}$ & Abraham Loeb$^{2}$\n\n*$^{1}$ D\u00e9partement de Physique Th\u00e9orique, Universit\u00e9 de Gen\u00e8ve*\n\n*24 quai Ernest-Ansermet, CH 1211 Gen\u00e8ve 4, Switzerland*\n\nemail: hamsa.padmanabhan@unige.ch\n\n*$^{2}$ Astronomy department, Harvard University*\n\n*60 Garden Street, Cambridge, MA 02138, USA*\n\nemail: aloeb@cfa.harvard.edu\n\n0.2in\n\n------------------------------------------------------------------------\n\n0.2in\n\nIntroduction\n============\n\nThe first stars in the" +"---\nabstract: 'Standard models for syntactic dependency parsing take words to be the elementary units that enter into dependency relations. In this paper, we investigate whether there are any benefits from enriching these models with the more abstract notion of nucleus proposed by Tesni\u00e8re. 
We do this by showing how the concept of nucleus can be defined in the framework of Universal Dependencies and how we can use composition functions to make a transition-based dependency parser aware of this concept. Experiments on 12 languages show that nucleus composition gives small but significant improvements in parsing accuracy. Further analysis reveals that the improvement mainly concerns a small number of dependency relations, including nominal modifiers, relations of coordination, main predicates, and direct objects.'\nauthor:\n- |\n Ali Basirat\\\n Uppsala University\\\n Dept.\u00a0of Linguistics and Philology\\\n `ali.basirat@lingfil.uu.se`\\\n Joakim Nivre\\\n Uppsala University\\\n Dept.\u00a0of Linguistics and Philology\\\n `joakim.nivre@lingfil.uu.se`\\\nbibliography:\n- 'references.bib'\ntitle: 'Syntactic Nuclei in Dependency Parsing \u2013 A Multilingual Exploration'\n---\n\nIntroduction\n============\n\nA syntactic dependency tree consists of directed arcs, representing syntactic relations like subject and object, connecting a set of nodes, representing the elementary syntactic units of a sentence. In contemporary dependency parsing, it is generally assumed that the elementary units" +"---\nabstract: 'Topological insulators (TIs) are expected to be a promising platform for novel quantum phenomena, whose experimental realizations require sophisticated devices. In this Technical Review, we discuss four topics of particular interest for TI devices: topological superconductivity, quantum anomalous Hall insulator as a platform for exotic phenomena, spintronic functionalities, and topological mesoscopic physics. We also discuss the present status and technical challenges in TI device fabrications to address new physics.'\nauthor:\n- Oliver Breunig\n- Yoichi Ando\ntitle: Opportunities in topological insulator devices\n---\n\nIntroduction\n============\n\nAfter more than 10 years of research, the understanding of topological insulator (TI) materials[@Ando2013] has been well advanced. The next step is to use them as a platform for devices to realize novel and useful topological phenomena, such as emergence of chiral Majorana fermions[@He2017; @Kayyalha2020], topological qubits using Majorana zero-modes[@Aguado2020; @Manousakis2017], or topological magnetoelectric effects[@Qi2008] in the axion insulator state[@Mogi2017; @Xiao2018] (these concepts are explained later). Also, mesoscopic physics of the topological states of matter is a rich realm[@Muenning2021], but it has been largely left unexplored. Hence, TI devices provide promising opportunities for new discoveries.\n\nTIs are characterized by a nontrivial $Z_2$ topology of their bulk electronic wave functions, which leads to the" +"---\nabstract: 'As the COVID-19 spreads across the world, prevention measures are becoming the essential weapons to combat against the pandemic in the period of crisis. The lockdown measure is the most controversial one as it imposes an overwhelming impact on our economy and society. Especially when and how to enforce the lockdown measures are the most challenging questions considering both economic and epidemiological costs. In this paper, we extend the classic SIR model to find optimal decision making to balance between economy and people\u2019s health during the outbreak of COVID-19. 
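A compact sketch of the lockdown-extended SIR dynamics described in the record above, with a control parameter u in [0, 1] scaling the contact rate; the parameter values and the constant-u policy are assumptions, whereas the paper optimizes the lockdown rate over time against a welfare objective.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1          # transmission and recovery rates (per day)

def sir(t, y, u):
    S, I, R = y
    new_inf = (1.0 - u) * beta * S * I   # lockdown damps transmission
    return [-new_inf, new_inf - gamma * I, gamma * I]

for u in (0.0, 0.5):
    sol = solve_ivp(sir, (0, 300), [0.99, 0.01, 0.0], args=(u,), max_step=1.0)
    print(f"u={u:.1f}: peak infected fraction = {sol.y[1].max():.3f}")
```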
In our model, we solve a two-phase optimisation problem: policymakers control the lockdown rate to maximise the overall welfare of society; people in different health statuses make different decisions on their working hours and consumption to maximise their utility. We develop a novel method to estimate parameters for the model through various additional sources of data. We use the Cournot equilibrium to model people\u2019s behaviour and also consider the cost of death in order to balance economic and epidemic costs. The analysis of simulation results provides scientific suggestions for policymakers to make critical decisions on when to start the lockdown and how strong it" +"---\naddress: |\n Department of Mathematics, University of California at Berkeley, CA\\\n Department of Mathematics, Harvard University, Cambridge, MA\nauthor:\n- 'Donghyun Kim[^1],'\n- 'Lauren K. Williams[^2]'\nbibliography:\n- 'sample.bib'\ntitle: Schubert polynomials and the inhomogeneous TASEP on a ring\n---\n\nIntroduction\n============\n\nIn recent years, there has been a lot of work on interacting particle models such as the *asymmetric simple exclusion process* (ASEP), a model in which particles hop on a one-dimensional lattice subject to the condition that at most one particle may occupy a given site. The ASEP on a one-dimensional lattice with open boundaries has been linked to Askey-Wilson polynomials and Koornwinder polynomials [@CW1; @C2; @CW2], while the ASEP on a ring has been linked to Macdonald polynomials [@CGW; @CMW]. The *inhomogeneous totally asymmetric simple exclusion process* (TASEP) is a variant of the exclusion process on the ring in which the hopping rate depends on the weight of the particles. In this paper we build on works of Lam-Williams [@LW], Ayyer-Linusson [@AL], and especially Cantini [@C] to give formulas for many steady state probabilities of the inhomogeneous TASEP on a ring in terms of Schubert polynomials.\n\n\\[def:TASEP\\] Consider a lattice with $n$ sites arranged in a" +"---\nabstract: 'Turbulence has the potential for creating gas density enhancements that initiate cloud and star formation (SF), and it can be generated locally by SF. To study the connection between turbulence and SF, we looked for relationships between SF traced by FUV images, and gas turbulence traced by kinetic energy density (KED) and velocity dispersion ([$v_{disp}$]{}) in the LITTLE THINGS sample of nearby [dIrr]{} galaxies. We performed 2D cross-correlations between FUV and KED images, measured cross-correlations in annuli to produce correlation coefficients as a function of radius, and determined the cumulative distribution function of the cross-correlation value. We also plotted on a pixel-by-pixel basis the local excess KED, [$v_{disp}$]{}, and [H$\\,$[i]{}]{}\u00a0mass surface density, [$\\Sigma_{\\rm HI}$]{}, as determined from the respective values with the radial profiles subtracted, versus the excess SF rate density [$\\Sigma_{\\rm SFR}$]{}, for all regions with positive excess [$\\Sigma_{\\rm SFR}$]{}. We found that [$\\Sigma_{\\rm SFR}$]{}\u00a0and KED are poorly correlated. The excess KED associated with SF implies a $\\sim0.5$% efficiency for supernova energy to pump local [H$\\,$[i]{}]{} turbulence on the scale of resolution here, which is a factor of $\\sim2$ too small for all of the turbulence on a galactic scale.
The excess [$v_{disp}$]{}\u00a0in SF regions" +"---\nabstract: |\n I introduce novel preference formulations which capture aversion to ambiguity about unknown and potentially time-varying volatility. I compare these preferences with Gilboa and Schmeidler\u2019s maxmin expected utility as well as variational formulations of ambiguity aversion. The impact of ambiguity aversion is illustrated in a simple static model of portfolio choice, as well as a dynamic model of optimal contracting under repeated moral hazard. Implications for investor beliefs, optimal design of corporate securities, and asset pricing are explored.\\\n **JEL Classification:** D81, D86, G11, G12, G32\\\n **Keywords:** ambiguity, stochastic volatility, moral hazard, capital structure, asset pricing\nauthor:\n- 'Peter G. Hansen[^1]'\nbibliography:\n- 'ambiguous.bib'\ndate: \ntitle: 'New Formulations of Ambiguous Volatility with an Application to Optimal Dynamic Contracting[^2]'\n---\n\nIntroduction\n============\n\nThere is ample evidence that time-varying stochastic volatility exists and has important effects on real macroeconomic variables and is important in understanding empirical features of financial markets. The empirical evidence suggests that volatility follows complicated nonlinear dynamics, which often leads model builders to write down complicated parametric models of the evolution of volatility as well as its correlation with other economic quantities of interest. An obvious concern with this approach is whether it is possible for economic agents" +"---\nabstract: 'The Fourier transform proves indispensable in the processing of classical information as well as in the quantum domain, where it finds many applications ranging from state reconstruction to prime factorization. An implementation scheme of the $d$-dimensional Fourier transform acting on single photons is known that uses the path encoding and requires $O(d \\log d)$ optical elements. In this paper we present an alternative design that uses the orbital angular momentum as a carrier of information and needs only $O(\\sqrt{d}\\log d)$ elements, rendering the path-encoded design inefficient. The advantageous scaling and the fact that our approach uses only conventional optical elements allows for the implementation of a 256-dimensional Fourier transform with the existing technology. Improvements of our design, as well as explicit setups for low dimensions, are also presented.'\nauthor:\n- Jaroslav Kysela\nbibliography:\n- 'ref.bib'\ntitle: 'High-dimensional quantum Fourier transform of twisted light'\n---\n\nIntro\n=====\n\nThe Fourier transform is arguably one of the most important tools in modern mathematics, science and engineering. Its applications range from a purely mathematical use in differential calculus [@osgood2019lectures] to modelling optical properties of light such as a free-space propagation or a propagation through a system of lenses [@tysonFourier]. On a more" +"---\nabstract: |\n For any given graph $H$, one may define a natural corresponding functional $\\|.\\|_H$ for real-valued functions by using homomorphism density. One may also extend this to complex-valued functions, once $H$ is paired with a $2$-edge-colouring $\\alpha$ to assign conjugates. We say that $H$ is *real-norming* (resp. *complex-norming*) if $\\|.\\|_H$ (resp. $\\|.\\|_{H,\\alpha}$ for some $\\alpha$) is a norm on the vector space of real-valued (resp. complex-valued) functions. 
These generalise the Gowers octahedral norms, a widely used tool in extremal combinatorics to quantify quasirandomness.\n\n We unify these two seemingly different notions of graph norms in real- and complex-valued settings. Namely, we prove that $H$ is complex-norming if and only if it is real-norming and simply call the property *norming*. Our proof does not explicitly construct a suitable $2$-edge-colouring $\\alpha$ but obtains its existence and uniqueness, which may be of independent interest.\n\n As an application, we give various example graphs that are not norming. In particular, we show that hypercubes are not norming, which resolves the last outstanding problem posed in Hatami\u2019s pioneering work on graph norms.\nauthor:\n- 'Joonkyung Lee[^1]'\n- 'Alexander Sidorenko[^2]'\nbibliography:\n- 'references.bib'\ntitle: 'On graph norms for complex-valued functions'\n---\n\nIntroduction\n============\n\nOne of the" +"---\nabstract: 'We propose a co-design approach for *compute-in-memory* inference for deep neural networks (DNN). We use multiplication-free function approximators based on $\\ell_1$ norm along with a co-adapted processing array and compute flow. Using the approach, we overcame many deficiencies in the current *art* of in-SRAM DNN processing such as the need for digital-to-analog converters (DACs) at each operating SRAM row/column, the need for high precision analog-to-digital converters (ADCs), limited support for multi-bit precision weights, and limited vector-scale parallelism. Our co-adapted implementation seamlessly extends to multi-bit precision weights, it doesn\u2019t require DACs, and it easily extends to higher vector-scale parallelism. We also propose an SRAM-immersed successive approximation ADC (SA-ADC), where we exploit the parasitic capacitance of bit lines of SRAM array as a capacitive DAC. Since the dominant area overhead in SA-ADC comes due to its capacitive DAC, by exploiting the intrinsic parasitic of SRAM array, our approach allows low area implementation of within-SRAM SA-ADC. Our 8$\\times$62 SRAM macro, which requires a 5-bit ADC, achieves $\\sim$105 tera operations per second per Watt (TOPS/W) with 8-bit input/weight processing at 45 nm CMOS. Our 8$\\times$30 SRAM macro, which requires a 4-bit ADC, achieves $\\sim$84 TOPS/W. SRAM macros that require lower ADC precision" +"---\nabstract: 'In the attention economy, video apps employ design mechanisms like autoplay that exploit psychological vulnerabilities to maximize watch time. Consequently, many people feel a lack of agency over their app use, which is linked to negative life effects such as loss of sleep. Prior design research has innovated *external mechanisms* that police multiple apps, such as lockout timers. In this work, we shift the focus to how the *internal mechanisms* of an app can support user agency, taking the popular YouTube mobile app as a test case. From a survey of 120 U.S. users, we find that autoplay and recommendations primarily undermine sense of agency, while search and playlists support it. From 13 co-design sessions, we find that when users have a specific intention for how they want to use YouTube they prefer interfaces that support greater agency. We discuss implications for how designers can help users reclaim a sense of agency over their media use.'\nauthor:\n- Kai Lukoff\n- Ulrik lyngs\n- Himanshu Zade\n- 'J. Vera Liao'\n- James Choi\n- Kaiyue Fan\n- 'Sean A. 
Munson'\n- Alexis Hiniker\nbibliography:\n- 'references.bib'\nnocite: '[@*]'\ntitle: How the Design of YouTube Influences User Sense of" +"---\nabstract: 'We investigate features of the deconfinement phase transition in an $SU(N_c)$ gauge theory as revealed by fluctuations of the order parameter. The tool of choice is an effective model built from one-loop expressions of the field determinants of gluon and ghost, in the presence of a Polyakov loop background field. We show that the curvature masses associated with the Cartan angles, which serve as a proxy to study the $A_0$-gluon screening mass, show a characteristic dip in the vicinity of the transition temperature. The strength of the observables, which reflects a competition between the confining and the deconfining forces, is sensitive to assumptions of dynamics, and thus provides an interesting link between the $Z(N_c)$ vacuum structure and the properties of gluon and ghost propagators.'\nauthor:\n- Pok Man\n- Krzysztof Redlich\n- Chihiro Sasaki\nbibliography:\n- 'ref.bib'\ntitle: 'Fluctuations of the order parameter in an $SU(N_c)$ effective model'\n---\n\nIntroduction\n============\n\nIn this work we study the fluctuations of the order parameter in an $SU(N_c)$ gauge theory within an effective model. Unlike the order parameter, these observables are finite and temperature dependent even in the confined phase, thus providing important diagnostic information about the mechanism of deconfinement phase" +"---\nabstract: 'Various methods can obtain certified estimates for roots of polynomials. Many applications in science and engineering additionally utilize the value of functions evaluated at roots. For example, critical values are obtained by evaluating an objective function at critical points. For analytic evaluation functions, Newton\u2019s method naturally applies to yield certified estimates. These estimates no longer apply, however, for H\u00f6lder continuous functions, which are a generalization of Lipschitz continuous functions where continuous derivatives need not exist. This work develops and analyzes an alternative approach for certified estimates of evaluating locally H\u00f6lder continuous functions at roots of polynomials.\u00a0An implementation of the method in [Maple]{}\u00a0demonstrates\u00a0efficacy\u00a0and\u00a0efficiency.'\nauthor:\n- 'Parker B. Edwards'\n- \n- \ntitle: Certified evaluations of H\u00f6lder continuous functions at roots of polynomials\n---\n\nIntroduction {#sec:Intro}\n============\n\nFor a univariate polynomial $p(x)$, the Abel-Ruffini theorem posits that the roots cannot be expressed in terms of radicals for general polynomials of degree at least\u00a0$5$. A simple illustration of this is that the solutions of the quintic equation $$\\label{eq:SimpleQuintic}\np(x) = x^5 - x - 1 = 0$$ cannot be expressed in radicals. Thus, a common technique is to compute numerical approximations with certified bounds for the" +"---\nabstract: 'NEID is a high-resolution optical spectrograph on the WIYN 3.5-m telescope at Kitt Peak National Observatory and will soon join the new generation of extreme precision radial velocity instruments in operation around the world. We plan to use the instrument to conduct the NEID Earth Twin Survey (NETS) over the course of the next 5 years, collecting hundreds of observations of some of the nearest and brightest stars in an effort to probe the regime of Earth-mass exoplanets. 
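For the certified-evaluation record above, the classical baseline is Newton iteration on the quintic p(x) = x^5 - x - 1 with an a-posteriori residual check; this sketch shows only that baseline and does not reproduce the paper's certified bounds for Hölder continuous evaluation functions.

```python
def p(x):  return x**5 - x - 1
def dp(x): return 5 * x**4 - 1

x = 1.5                        # initial guess
for _ in range(30):
    step = p(x) / dp(x)
    x -= step
    if abs(step) < 1e-15:      # crude stopping criterion
        break
print(x, p(x))   # root near 1.1673, residual at machine precision
```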
Even if we take advantage of the extreme instrumental precision conferred by NEID, it will remain difficult to disentangle the weak ($\\sim$10 [cm s$^{-1}$]{}) signals induced by such low-mass, long-period exoplanets from stellar noise for all but the quietest host stars. In this work, we present a set of quantitative selection metrics which we use to identify an initial NETS target list consisting of stars conducive to the detection of exoplanets in the regime of interest. We also outline a set of observing strategies with which we aim to mitigate uncertainty contributions from intrinsic stellar variability and other sources of noise.'\nauthor:\n- 'Arvind F.\u00a0Gupta'\n- 'Jason T.\u00a0Wright'\n- Paul Robertson\n- Samuel Halverson\n- Jacob Luhn\n-" +"---\nabstract: 'The evaluation of obstructions (stenosis) in coronary arteries is currently done by a physician\u2019s visual assessment of coronary angiography video sequences. It is laborious, and can be susceptible to interobserver variation. Prior studies have attempted to automate this process, but few have demonstrated an integrated suite of algorithms for the end-to-end analysis of angiograms. We report an automated analysis pipeline based on deep learning to rapidly and objectively assess coronary angiograms, highlight coronary vessels of interest, and quantify potential stenosis. We propose a 3-stage automated analysis method consisting of key frame extraction, vessel segmentation, and stenosis measurement. We combined powerful deep learning approaches such as ResNet and U-Net with traditional image processing and geometrical analysis. We trained and tested our algorithms on the Left Anterior Oblique (LAO) view of the right coronary artery (RCA) using anonymized angiograms obtained from a tertiary cardiac institution, then tested the generalizability of our technique to the Right Anterior Oblique (RAO) view. We demonstrated an overall improvement on previous work, with key frame extraction top-5 precision of 98.4%, vessel segmentation F1-Score of 0.891 and stenosis measurement 20.7% Type I Error rate.'\nbibliography:\n- 'main.bib'\ntitle: |\n Automated Deep Learning Analysis of Angiography\\\n Video" +"---\nabstract: 'Probabilistic time series forecasting involves estimating the distribution of future based on its history, which is essential for risk management in downstream decision-making. We propose a deep state space model for probabilistic time series forecasting whereby the non-linear emission model and transition model are parameterized by networks and the dependency is modeled by recurrent neural nets. We take the automatic relevance determination (ARD) view and devise a network to exploit the exogenous variables in addition to time series. In particular, our ARD network can incorporate the uncertainty of the exogenous variables and eventually helps identify useful exogenous variables and suppress those irrelevant for forecasting. The distribution of multi-step ahead forecasts are approximated by Monte Carlo simulation. We show in experiments that our model produces accurate and sharp probabilistic forecasts. 
The estimated uncertainty of our forecasts also increases realistically over time, in a spontaneous manner.'\nauthor:\n- 'Longyuan Li$^{1,2}$'\n- 'Junchi Yan$^{2,3}$[^1]'\n- |\n Xiaokang Yang$^{2,3}$ Yaohui Jin$^{1,2*}$\\\n $^1$State Key Lab of Advanced Optical Communication System and Network\\\n $^2$MoE Key Lab of Artificial Intelligence, AI Institute\\\n $^3$Department of Computer Science and Engineering\\\n Shanghai Jiao Tong University\\\n {jeffli, yanjunchi,xkyang,jinyh}@sjtu.edu.cn\ntitle: |\n Learning Interpretable Deep State Space Model for\\\n Probabilistic Time Series" +"---\nabstract: 'Scan chains provide increased controllability and observability for testing digital circuits. The increased testability, however, can also be a source of information leakage for sensitive designs. The state-of-the-art defenses to secure scan chains apply dynamic keys to pseudo-randomly invert the scan vectors. In this paper, we pinpoint an algebraic vulnerability of these dynamic defenses that involves creating and solving a system of linear equations over the finite field GF(2). In particular, we propose a novel GF(2)-based flush attack that breaks even the most rigorous version of state-of-the-art dynamic defenses. Our experimental results demonstrate that our attack recovers keys as long as 500 bits in less than 7 seconds; the attack times are about one hundredth of those of state-of-the-art SAT-based attacks on the same defenses. We then demonstrate how our attacks can be extended to scan chains compressed with Multiple-Input Signature Registers (MISRs).'\nauthor:\n- 'Dake\u00a0Chen,\u00a0 Chunxiao\u00a0Lin, Peter\u00a0A.\u00a0Beerel,\u00a0'\ntitle: |\n GF-Flush: A GF(2) Algebraic Attack on\\\n Secure Scan Chains\\\n---\n\nHardware Security, Logic Locking, Dynamic Obfuscated Scan Chain, GF(2) Analysis, Algebraic Attack\n\nIntroduction {#sec:intro}\n============\n\nThe decentralized supply chain of modern integrated circuit (IC) design and manufacturing raises significant concerns related to threats
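The core algebraic step behind the flush attack in the scan-chain record above is solving a linear system over GF(2). A self-contained Gaussian-elimination sketch follows; the example system is synthetic, whereas the attack assembles its equations from observed scan-out responses.

```python
import numpy as np

def solve_gf2(A, b):
    # Row-reduce the augmented matrix [A | b] over GF(2) using XOR.
    M = np.concatenate([A % 2, (b % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    n, m = M.shape[0], M.shape[1] - 1
    row = 0
    for col in range(m):
        piv = next((r for r in range(row, n) if M[r, col]), None)
        if piv is None:
            continue
        M[[row, piv]] = M[[piv, row]]
        for r in range(n):
            if r != row and M[r, col]:
                M[r] ^= M[row]          # XOR row elimination
        row += 1
    return M[:m, -1]                    # solution when the system is consistent

A = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]])
b = np.array([1, 0, 1])
x = solve_gf2(A, b)
print(x, (A @ x) % 2)                   # check: reproduces b
```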
We examine the effect of the free parameters on the values of" +"---\nabstract: 'Large optical nonlinearities can have numerous applications, ranging from the generation of cat-states for optical quantum computation, through to quantum sensing where the sensitivity exceeds Heisenberg scaling in the resources. However, the generation of ultra-large optical nonlinearities has proved immensely challenging experimentally. We describe a novel protocol where one can effectively generate large optical nonlinearities via the conditional application of a [ linear operation]{} on an optical mode by an ancilla mode, followed by a measurement of the ancilla and corrective operation on the probe mode. Our protocol can generate high quality optical Schr[\u00f6]{}dinger cat states useful for optical quantum computing and can be used to perform sensing of an unknown rotation [or]{} displacement in phase space, with super-Heisenberg scaling in the resources. We finally describe a potential experimental implementation using atomic ensembles interacting with optical modes via the Faraday effect.'\nauthor:\n- 'Mattias T. Johnsson'\n- 'Pablo M. Poggi'\n- 'Marco A. Rodriguez'\n- 'Rafael N. Alexander'\n- Jason Twamley\nbibliography:\n- 'jasonsnonlinearmetrologymendeleygroup.bib'\ntitle: 'Generating nonlinearities from conditional linear operations, squeezing and measurement for quantum computation and super-Heisenberg sensing '\n---\n\nIntroduction\n============\n\nOptical nonlinearities, and in particular the Kerr nonlinear oscillator, have been the focus of" +"---\nabstract: 'We present results from an extensive search in the literature and *Gaia*\u00a0DR2 for visual co-moving binary companions to stars hosting exoplanets and brown dwarfs within 200\u00a0pc. We found 218 planet hosts out of the 938 in our sample to be part of multiple-star systems, with 10 newly discovered binaries and 2 new tertiary stellar components. This represents an overall raw multiplicity rate of $23.2\\pm1.6\\,\\%$ for hosts to exoplanets across all spectral types, with multi-planet systems found to have a lower stellar duplicity frequency at the 2.2-$\\sigma$ level. We found that more massive hosts are more often in binary configurations, and that planet-bearing stars in multiple systems are predominantly observed to be the most massive component of stellar binaries. Investigations of the multiplicity of planetary systems as a function of planet mass and separation revealed that giant planets with masses above 0.1\u00a0M$_\\mathrm{Jup}$ are more frequently seen in stellar binaries than small sub-Jovian planets with a 3.6-$\\sigma$ difference, a trend enhanced for the most massive ($>$7\u00a0M$_\\mathrm{Jup}$) short-period ($<$0.5\u00a0AU) planets and brown dwarf companions. Binarity was however found to have no significant effect on the demographics of low-mass planets ($<$0.1\u00a0M$_\\mathrm{Jup}$) or warm and cool gas" +"---\nauthor:\n- \n- \n- \n- \n- \ntitle: High performance reconciliation for practical quantum key distribution systems\n---\n\nIntroduction {#intro}\n============\n\nQuantum key distribution (QKD) is a promising technique for distributing unconditionally secure keys between remote parties in real time [@1_Bennett_2014]. Although QKD systems can theoretically contribute towards enhancing the security of the communication systems, their practical applications are constricted due to their low secure key rates and high costs [@2_Yuan_2018; @3_Duplinskiy_2018]. 
To address this issue, most research has focused on optimizing the two major QKD layers, i.e., the so-called photonic layer and post-processing layer [@2_Yuan_2018]. The photonic layer has in the past been considered the biggest impediment to improving the secure key rate. However, with the recent advances in single-photon detector technologies [@4_Boaron_2018], photonic integrated circuits [@5_Pirandola_2020] and other key technologies [@6_Lucamarini_2018; @7_Yin_2016], the performance bottleneck is gradually shifting to the post-processing layer [@2_Yuan_2018; @8_Zhang_2020]. Thus, as the major performance-limiting module in the post-processing layer, reconciliation has attracted extensive attention [@9_Dixon_2014; @10_Wang_2018; @11_Gao_2019; @12_abd2020controlled]. In practice, the reconciliation module has two main performance metrics: efficiency (i.e. the ratio of actual transmitted information to the necessary amount of information) and throughput (i.e. the amount of data that can" +"---\nabstract: |\n We study the efficacy and efficiency of deep generative networks for approximating probability distributions. We prove that neural networks can transform a low-dimensional source distribution to a distribution that is arbitrarily close to a high-dimensional target distribution, when closeness is measured by Wasserstein distances and maximum mean discrepancy. Upper bounds of the approximation error are obtained in terms of the width and depth of the neural network. Furthermore, it is shown that the approximation error in Wasserstein distance grows at most linearly in the ambient dimension and that the approximation order only depends on the intrinsic dimension of the target distribution. In contrast, when $f$-divergences are used as metrics of distributions, the approximation property is different. We show that in order to approximate the target distribution in $f$-divergences, the dimension of the source distribution cannot be smaller than the intrinsic dimension of the target distribution.\n\n **Keywords:** Deep ReLU networks; generative adversarial networks; approximation complexity; Wasserstein distance; maximum mean discrepancy.\nauthor:\n- 'Yunfei Yang [^1]'\n- 'Zhen Li [^2]'\n- Yang Wang\nbibliography:\n- 'references.bib'\ntitle: On the capacity of deep generative networks for approximating distributions\n---\n\nIntroduction\n============\n\nIn recent years, deep generative models have made" +"---\nabstract: |\n Classical work by Salmon and Bromwich classified singular intersections of two quadric surfaces. The basic idea of these results was already pursued by Cayley in connection with tangent intersections of conics in the plane and used by Sch\u00e4fli for the study of hyperdeterminants. More recently, the problem has been revisited with similar tools in the context of geometric modeling and a generalization to the case of two higher dimensional quadric hypersurfaces was given by Ottaviani. We propose and study a generalization of this question for systems of Laurent polynomials with support on a fixed point configuration.\n\n In the non-defective case, the closure of the locus of coefficients giving a non-degenerate multiple root of the system is defined by a polynomial called the [*mixed discriminant*]{}. We define a related polynomial called the multivariate [*iterated discriminant*]{}, generalizing the classical Sch\u00e4fli method for hyperdeterminants.
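Returning to the reconciliation metrics quoted earlier in this record: the efficiency metric can be computed directly. The sketch below uses the common convention that the minimum information needed is $n\,h(e)$ for a sifted key of length $n$ at quantum bit error rate $e$; this convention and the example numbers are assumptions for illustration, not values from the paper.

```python
from math import log2

def binary_entropy(e):
    """Shannon entropy h(e) of a binary symmetric channel with error rate e."""
    if e in (0.0, 1.0):
        return 0.0
    return -e * log2(e) - (1 - e) * log2(1 - e)

def reconciliation_efficiency(disclosed_bits, n, qber):
    """Ratio of information actually disclosed for error correction to the
    minimum amount n*h(e) required in principle (>= 1 in practice)."""
    return disclosed_bits / (n * binary_entropy(qber))

# Example: a 10^6-bit sifted key at 2% QBER with 160 kb of syndrome disclosed.
print(reconciliation_efficiency(160_000, 1_000_000, 0.02))  # ~1.13
```

Throughput, the second metric, is simply the key volume processed per unit time, so the two quantities trade off against each other in practical reconciliation designs.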
This iterated discriminant is easier to compute and we prove that it is always divisible by the mixed discriminant. We show that tangent intersections can be computed via iteration if and only if the singular locus of a corresponding dual variety has sufficiently high codimension. We also study when point configurations corresponding to Segre-Veronese varieties and" +"---\nabstract: 'Artificial neural network (ANN)-based machine learning models and especially deep learning models have been widely applied in computer vision, signal processing, wireless communications, and many other domains, where complex numbers occur either naturally or by design. However, most of the current implementations of ANNs and machine learning frameworks use real numbers rather than complex numbers. There is growing interest in building ANNs using complex numbers, and exploring the potential advantages of the so-called complex-valued neural networks (CVNNs) over their real-valued counterparts. In this paper, we discuss the recent development of CVNNs by performing a survey of the works on CVNNs in the literature. Specifically, a detailed review of various CVNNs in terms of activation functions, learning and optimization, input and output representations, and their applications in tasks such as signal processing and computer vision is provided, followed by a discussion on some pertinent challenges and future research directions.'\nauthor:\n- \nbibliography:\n- 'MLMVN-01-17-2021.bib'\ntitle: 'A Survey of Complex-Valued Neural Networks'\n---\n\ncomplex-valued neural networks; complex number; machine learning; deep learning\n\nIntroduction\n============\n\nArtificial neural networks (ANNs) are data-driven computing systems inspired by the dynamics and functionality of the human brain. With the advances in machine learning" +"---\nabstract: 'We present two classical algorithms for the simulation of universal quantum circuits on $n$ qubits constructed from $c$ instances of Clifford gates and $t$ arbitrary-angle $Z$-rotation gates such as $T$ gates. Our algorithms complement each other by performing best in different parameter regimes. The ${\\textsc{Estimate}}$ algorithm produces an additive precision estimate of the Born rule probability of a chosen measurement outcome with the only source of run-time inefficiency being a linear dependence on the stabilizer extent (which scales like $\\approx 1.17^t$ for $T$ gates). Our algorithm is state-of-the-art for this task: as an example, in approximately $13$ hours (on a standard desktop computer), we estimated the Born rule probability to within an additive error of $0.03$, for a $50$-qubit, $60$ non-Clifford gate quantum circuit with more than $2000$ Clifford gates. Our second algorithm, ${\\textsc{Compute}}$, calculates the probability of a chosen measurement outcome to machine precision with run-time ${O\\left(2^{t-r} t\\right)}$ where $r$ is an efficiently computable, circuit-specific quantity. With high probability, $r$ is very close to $\\min {\\left\\{t, n-w\\right\\}}$ for random circuits with many Clifford gates, where $w$ is the number of measured qubits. ${\\textsc{Compute}}$ can be effective in surprisingly challenging parameter regimes, e.g., we can randomly sample Clifford+$T$" +"---\nabstract: 'Pion and kaon structural properties provide insights into the emergence of mass within the Standard Model and attendant modulations by the Higgs boson.
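The run-time scalings quoted in the circuit-simulation abstract above translate into a simple cost model. Only the growth rates ($\approx 1.17^t$ for Estimate and $O(2^{t-r}t)$ for Compute) come from the text; the unit constants and the example values of $r$ below are placeholders.

```python
def estimate_cost(t):
    """Relative cost of the sampling-based ESTIMATE algorithm:
    linear in the stabilizer extent, which is ~1.17**t for t T gates."""
    return 1.17 ** t

def compute_cost(t, r):
    """Relative cost of the exact COMPUTE algorithm: O(2**(t - r) * t),
    where r is an efficiently computable, circuit-specific quantity."""
    return 2.0 ** (t - r) * t

# For a circuit with t = 60 T gates, COMPUTE overtakes ESTIMATE only when
# the circuit-specific r is large enough (all costs are unitless ratios).
for r in (20, 40, 55):
    print(r, estimate_cost(60), compute_cost(60, r))
```

This is why the two algorithms are complementary: Estimate wins for generic circuits with modest precision targets, while Compute wins whenever $r$ approaches $t$.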
Novel expressions of these effects, in impact parameter space and in mass and pressure profiles, are exposed via $\\pi$ and $K$ generalised parton distributions, built using the overlap representation from light-front wave functions constrained by one-dimensional valence distribution functions that describe available data. Notably, *e.g*.\u00a0$K$ pressure profiles are spatially more compact than $\\pi$ profiles and both achieve near-core pressures of similar magnitude to that found in neutron stars.'\naddress:\n- ' Department of Physics, Nanjing Normal University, Nanjing, Jiangsu 210023, China'\n- ' School of Physics, Nankai University, Tianjin 300071, China'\n- 'Instituto de Ciencias Nucleares, Universidad Nacional Aut[\u00f3]{}noma de M[\u00e9]{}xico, Apartado Postal 70-543, C.P.\u00a004510, CDMX, M[\u00e9]{}xico'\n- ' School of Physics, Nanjing University, Nanjing, Jiangsu 210093, China'\n- ' Institute for Nonperturbative Physics, Nanjing University, Nanjing, Jiangsu 210093, China'\n- |\n Department of Integrated Sciences and Center for Advanced Studies in Physics, Mathematics and Computation, University of Huelva, E-21071 Huelva, Spain\\\n Email addresses: (L. Chang); (C. D. Roberts); (J. Rodr\u00edguez-Quintero) \nauthor:\n- 'J.-L. Zhang'\n- 'K. Raya'\n- 'L." +"---\nabstract: 'Ransomware is a growing threat that typically operates by either encrypting a victim\u2019s files or locking a victim\u2019s computer until the victim pays a ransom. However, it is still challenging to detect such malware in a timely manner with existing traditional malware detection techniques. In this paper, we present a novel ransomware detection system, called \u201cPeeler\u201d (**P**rofiling k**E**rn**E**l-**L**evel **E**vents to detect **R**ansomware). Peeler deviates from signatures for individual ransomware samples and relies on common and generic characteristics of ransomware depicted at the kernel level. Analyzing diverse ransomware families, we observed ransomware\u2019s inherent behavioral characteristics such as stealth operations performed before the attack, file I/O request patterns, process spawning, and correlations among kernel-level events. Based on those characteristics, we develop Peeler that continuously monitors a target system\u2019s kernel events and detects ransomware attacks on the system. Our experimental results show that Peeler achieves a detection rate of more than 99% with a 0.58% false-positive rate against 43 distinct ransomware families, containing samples from both crypto and screen-locker types of ransomware. For crypto ransomware, Peeler detects them promptly after only one file is lost (within 115 milliseconds on average). Peeler utilizes around 4.9% of CPU time with only 9.8 MB memory under normal workload conditions." +"---\nabstract: 'Generalized zero-shot learning (GZSL) is one of the most realistic but challenging problems due to the partiality of the classifier to supervised classes, especially under the class-inductive instance-inductive (CIII) training setting, where testing data are not available. Instance-borrowing methods and synthesizing methods solve it to some extent with the help of testing semantics, but therefore neither can be used under CIII. Moreover, the latter require training a classifier after the examples are generated.
In contrast, a novel non-transductive regularization under CIII called **Semantic Borrowing (SB)** for improving GZSL methods with compatibility metric learning is proposed in this paper, which can be used to train not only linear models but also nonlinear ones such as artificial neural networks. This regularization term in the loss function borrows similar semantics in the training set, so that the classifier can model the relationship between the semantics of zero-shot and supervised classes more accurately during training. In practice, the semantics of unknown classes are not available during training, and this approach does NOT need them. Extensive experiments on GZSL benchmark datasets show that SB can reduce the partiality of the classifier to supervised classes and improve the performance of" +"---\nabstract: 'We present the first results from SPHINX-MHD, a suite of cosmological radiation-magnetohydrodynamics simulations designed to study the impact of primordial magnetic fields (PMFs) on galaxy formation and the evolution of the intergalactic medium (IGM) during the epoch of reionization. The simulations are among the first to employ multi-frequency, on-the-fly radiation transfer and constrained transport ideal MHD in a cosmological context to simultaneously model the inhomogeneous process of reionization as well as the growth of primordial magnetic fields. We run a series of $(5\\,\\text{cMpc})^3$ cosmological volumes, varying both the strength of the seed magnetic field and its spectral index. We find that PMFs with a spectral index ($n_B$) and a comoving amplitude ($B_0$) that have $n_B > -0.562\\log_{10}\\left(\\frac{B_0}{1{\\rm n}G}\\right) - 3.35$ produce electron optical depths ($\\tau_e$) that are inconsistent with CMB constraints due to the unrealistically early collapse of low-mass dwarf galaxies. For $n_B\\geq-2.9$, our constraints are considerably tighter than the $\\sim{\\rm n}G$ constraints from Planck. PMFs that do not satisfy our constraints have little impact on the reionization history or the shape of the UV luminosity function. Likewise, detecting changes in the Ly$\\alpha$ forest due to PMFs will be challenging because photoionisation and photoheating efficiently smooth the density" +"---\nabstract: 'Due to their small mass, subsolar mass black hole binaries would have to be primordial in origin instead of the result of stellar evolution. Soon after formation in the early universe, primordial black holes can form binaries after decoupling from the cosmic expansion. Alternatively, primordial black holes as dark matter could also form binaries in the late universe due to dynamical encounters and gravitational-wave braking. A significant feature for this channel is the possibility that some sources retain nonzero eccentricity in the LIGO/Virgo band. Assuming all dark matter is primordial black holes with a delta function mass distribution, $1{M_\\odot}-1{M_\\odot}$ binaries formed in this late-universe channel can be detected by Advanced LIGO and Virgo with their design sensitivities at a rate of $\\mathcal{O}(1)$/year, where $12\\%(3\\%)$ of events have eccentricity at a gravitational-wave frequency of 10 Hz, $e^\\mathrm{10Hz}\\geq0.01(0.1)$, and nondetection can constrain the binary formation rate within this model. Third generation detectors would be expected to detect subsolar mass eccentric binaries as light as $0.01 {M_\\odot}$ within this channel, if they accounted for the majority of the dark matter.
Furthermore, we use simulated gravitational-wave data to study the ability to search for eccentric gravitational-wave signals using a quasi-circular waveform template" +"---\nabstract: 'The progression of lung cancer implies the intrinsic ordinal relationship of lung nodules at different stages\u2014from *benign* to *unsure* then to *malignant*. This problem can be solved by ordinal regression methods, which lie between classification and regression due to the ordinal labels. However, existing convolutional neural network (CNN)-based ordinal regression methods only focus on modifying the classification head based on a randomly sampled mini-batch of data, ignoring the ordinal relationship residing in the data itself. In this paper, we propose a Meta Ordinal Weighting Network (MOW-Net) to explicitly align each training sample with a meta ordinal set (MOS) containing a few samples from all classes. During the training process, the MOW-Net learns a mapping from samples in the MOS to the corresponding class-specific weight. In addition, we further propose a meta cross-entropy (MCE) loss to optimize the network in a meta-learning scheme. The experimental results demonstrate that the MOW-Net achieves better accuracy than the state-of-the-art ordinal regression methods, especially for the unsure class.'\naddress: |\n $^1$Shanghai Key Lab of Intelligent Information Processing, School of Computer Science\\\n $^2$Institute of Science and Technology for Brain-inspired Intelligence\\\n Fudan University, Shanghai 200433, China \ntitle: Meta Ordinal Weighting Net for Improving Lung Nodule Classification\n---" +"---\nabstract: |\n Data-plane programmability is now mainstream. As we find more use cases, deployments need to be able to run multiple packet-processing modules in a single device. These are likely to be developed by independent teams, either within the same organization or from multiple organizations. Therefore, we need isolation mechanisms to ensure that modules on the same device do not interfere with each other.\n\n This paper presents [Menshen]{}, an extension of the Reconfigurable Match Tables (RMT) pipeline that enforces isolation between different packet-processing modules. [Menshen]{} comprises a set of lightweight hardware primitives and an extension to the open source P4-16 reference compiler that act in conjunction to meet this goal. We have prototyped [Menshen]{} on two FPGA platforms (NetFPGA and Corundum). We show that our design provides isolation, and allows new modules to be loaded without impacting the ones already running. Finally, we demonstrate the feasibility of implementing [Menshen]{} on ASICs by using the FreePDK45nm technology library and the Synopsys DC synthesis software, showing that our design meets timing at a 1 GHz clock frequency and needs approximately 6% additional chip area. We have open sourced the code for [Menshen]{}\u2019s hardware and software at []{}.\nauthor:\n- '[Tao Wang$^\\dagger$]{}'\n-" +"---\nabstract: 'We investigate nonequilibrium steady states in a class of one-dimensional diffusive systems that can attain negative absolute temperatures. The cases of a paramagnetic spin system, a Hamiltonian rotator chain and a one-dimensional discrete linear Schr\u00f6dinger equation are considered. Suitable models of reservoirs are implemented to impose given, possibly negative, temperatures at the chain ends.
We show that a phenomenological description in terms of a Fourier law can consistently describe unusual transport regimes where the temperature profiles are entirely or partially in the negative-temperature region. Negative-temperature Fourier transport is observed both for deterministic and stochastic dynamics and it can be generalized to coupled transport when two or more thermodynamic currents flow through the system.'\nauthor:\n- Marco Baldovin\n- Stefano Iubini\nbibliography:\n- 'biblio.bib'\ntitle: 'Negative-temperature Fourier transport in one-dimensional systems'\n---\n\nIntroduction {#sec:Intro}\n============\n\nThe characterization of steady states of open classical and quantum systems is a central problem in physics, with many implications both for theoretical studies and for applications. In the context of nonequilibrium statistical mechanics, the study of transport problems in simple low-dimensional lattices is a field which has been deeply investigated in the last decades. Relevant examples include the discovery of nonequilibrium phase transitions" +"---\nabstract: 'We use dispersion-corrected density-functional theory to determine the relative energies of competing polytypes of bulk layered hexagonal post-transition-metal chalcogenides, to search for the most stable structures of these potentially technologically important semiconductors. We show that there is some degree of consensus among dispersion-corrected exchange-correlation functionals regarding the energetic orderings of polytypes, but we find that for each material there are multiple stacking orders with relative energies of less than 1 meV per monolayer unit cell, implying that stacking faults are expected to be abundant in all post-transition-metal chalcogenides. By fitting a simple model to all our energy data, we predict that the most stable hexagonal structure has P$6_3$/mmc space group in each case, but that the stacking order differs between GaS, GaSe, GaTe, and InS on the one hand and InSe and InTe on the other. At zero pressure, the relative energies obtained with different functionals disagree by around 1\u20135 meV per monolayer unit cell, which is not sufficient to identify the most stable structure unambiguously; however, multi-GPa pressures reduce the number of competing phases significantly. At higher pressures, an AB$''$-stacked structure of the most stable monolayer polytype is found to be the most stable bulk structure; this" +"---\nabstract: 'The aim of this article is to provide characterizations for subadditivity-like growth conditions for the so-called associated weight functions in terms of the defining weight sequence. Such growth requirements arise frequently in the literature and are standard when dealing with ultradifferentiable function classes defined by Braun-Meise-Taylor weight functions since they imply or even characterize important and desired consequences for the underlying function spaces, e.g. 
closedness under composition.'\naddress: 'G.\u00a0Schindl: Fakult\u00e4t f\u00fcr Mathematik, Universit\u00e4t Wien, Oskar-Morgenstern-Platz\u00a01, A-1090 Wien, Austria.'\nauthor:\n- Gerhard Schindl\nbibliography:\n- 'Bibliography.bib'\ntitle: 'On subadditivity-like conditions for associated weight functions'\n---\n\n[^1]\n\nIntroduction {#Introduction}\n============\n\nIn the theory of ultradifferentiable function spaces there exist two classical, in general distinct (see [@BonetMeiseMelikhov07]), approaches in order to control the growth of the derivatives of the functions belonging to such classes: Either one uses a weight sequence $M=(M_p)_p$ or a weight function $\\omega:[0,+\\infty)\\rightarrow[0,+\\infty)$. In both settings one requires several basic growth and regularity assumptions on $M$ and $\\omega$ and one distinguishes between two types, the [*Roumieu-type spaces*]{} $\\mathcal{E}_{\\{M\\}}$ and $\\mathcal{E}_{\\{\\omega\\}}$, and the [*Beurling-type spaces*]{} $\\mathcal{E}_{(M)}$ and $\\mathcal{E}_{(\\omega)}$. In the following we write $\\mathcal{E}_{[\\star]}$ if we mean either $\\mathcal{E}_{\\{\\star\\}}$ or $\\mathcal{E}_{(\\star)}$, but not mixing the cases.\n\nSubadditivity-like" +"---\nabstract: 'We propose a method for tracing implicit real algebraic curves defined by polynomials with rank-deficient Jacobians. For a given curve $f^{-1}(0)$, it first utilizes a regularization technique to compute at least one witness point per connected component of the curve. We improve this step by establishing a sufficient condition for testing the emptiness of $f^{-1}(0)$. We also analyze the convergence rate and carry out an error analysis for refining the witness points. The witness points are obtained by computing the minimum distance of a random point to a smooth manifold embedding the curve while at the same time penalizing the residual of $f$ at the local minima. To trace the curve starting from these witness points, we prove that if one drags the random point along a trajectory inside a tubular neighborhood of the embedded manifold of the curve, the projection of the trajectory on the manifold is unique and can be computed by numerical continuation. We then show how to choose such a trajectory to approximate the curve by computing eigenvectors of certain matrices. Effectiveness of the method is illustrated by examples.'\nauthor:\n- 'Wenyuan Wu[^1]'\n- 'Changbo Chen[^2]'\ntitle: 'A Companion Curve Tracing Method for Rank-deficient" +"---\nabstract: 'Interactive speech recognition systems must generate words quickly while also producing accurate results. Two-pass models excel at these requirements by employing a first-pass decoder that quickly emits words, and a second-pass decoder that requires more context but is more accurate. Previous work has established that a deliberation network can be an effective second-pass model. The model attends to two kinds of inputs at once: encoded audio frames and the hypothesis text from the first-pass model. In this work, we explore using transformer layers instead of long-short term memory (LSTM) layers for deliberation rescoring. In transformer layers, we generalize the \u201cencoder-decoder\" attention to attend to both encoded audio and first-pass text hypotheses. The output context vectors are then combined by a merger layer. Compared to LSTM-based deliberation, our best transformer deliberation achieves 7% relative word error rate improvements along with a 38% reduction in computation. 
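The two-source deliberation attention just described (one attention over encoded audio, one over the first-pass hypothesis text, followed by a merger of the two context vectors) can be sketched in a few lines. Single-head attention, the concatenation-based merger, and all dimensions below are illustrative assumptions rather than the paper's exact architecture.

```python
import numpy as np

def attention(q, keys, values):
    """Single-head scaled dot-product attention of queries over one memory."""
    scores = q @ keys.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ values

def deliberation_layer(q, audio_enc, hyp_enc, w_merge):
    """Attend to encoded audio and first-pass hypotheses separately,
    then combine the two context vectors with a merger projection."""
    c_audio = attention(q, audio_enc, audio_enc)
    c_hyp = attention(q, hyp_enc, hyp_enc)
    return np.concatenate([c_audio, c_hyp], axis=-1) @ w_merge

rng = np.random.default_rng(0)
d = 16
q = rng.standard_normal((5, d))           # 5 decoder positions
audio_enc = rng.standard_normal((80, d))  # 80 encoded audio frames
hyp_enc = rng.standard_normal((12, d))    # 12 first-pass hypothesis tokens
w_merge = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)
out = deliberation_layer(q, audio_enc, hyp_enc, w_merge)  # (5, d)
```

Attending to both memories at every decoder position is what lets the second pass correct first-pass errors while still grounding its rescoring in the acoustics.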
We also compare against non-deliberation transformer rescoring, and find a 9% relative improvement.'\naddress: |\n Google, Inc., USA\\\n {huk,rpang,tsainath,strohman}@google.com\ntitle: 'Transformer Based Deliberation for Two-Pass Speech Recognition'\n---\n\nTransformer, deliberation network, rescoring, two-pass automatic speech recognition\n\nIntroduction {#sec:intro}\n============\n\nEnd-to-end (E2E) automatic speech recognition (ASR) has made rapid progress in recent years [@graves2012sequence; @chorowski2015attention;" +"---\nabstract: 'Let $n \\ge 2$ be an integer and consider the defining ideal of the Fermat configuration of points in ${{\\mathbb P}}^2$: $I_n=(x(y^n-z^n),y(z^n-x^n),z(x^n-y^n)) \\subset R={{\\mathbb C}}[x,y,z]$. In this paper, we compute explicitly the least degree of generators of its symbolic powers in all unknown cases. As direct applications, we easily verify Chudnovsky\u2019s Conjecture, Demailly\u2019s Conjecture and the Harbourne-Huneke Containment problem as well as calculating explicitly the Waldschmidt constant and (asymptotic) resurgence number.'\naddress:\n- |\n Tulane University\\\n Department of Mathematics\\\n 6823 St. Charles Ave.\\\n New Orleans, LA 70118, USA\n- 'University of Education, Hue University, 34 Le Loi St., Hue, Viet Nam'\nauthor:\n- 'Th\u00e1i Th\u00e0nh Nguy$\\tilde{\\text{\\^e}}$n'\nbibliography:\n- 'References.bib'\ntitle: The Initial Degree of Symbolic Powers of Ideals of Fermat Configuration of Points\n---\n\nIntroduction {#sec.intro}\n============\n\nLet $n \\ge 2$ be an integer and consider the *Fermat ideal* $$I_n=(x(y^n-z^n),y(z^n-x^n),z(x^n-y^n)) \\subset R={{\\mathbb C}}[x,y,z].$$\n\nThis ideal corresponds to the Fermat arrangement of lines (or Ceva arrangement in some literature) in ${{\\mathbb P}}^2$; more precisely, the variety of $I_n$ is a reduced set of $n^2+3$ points in ${{\\mathbb P}}^2$ [@HaSeFermat], where $n^2$ of these points form the intersection locus of the pencil of curves spanned by $x^n- y^n$ and $x^n-z^n$, while the" +"---\nabstract: |\n We propose probabilistic models to bound the forward error in the numerically computed sum of a vector with $n$ real elements. To do so, we generate our own deterministic bound for ease of comparison, and then create a model of our errors to generate probabilistic bounds of a comparable structure that can typically be computed alongside the actual computation of the sum.\\\n The errors are represented as bounded, independent, zero-mean random variables. We find that accuracy improves when the vectors do not sum to zero, and the bounds come within one order of magnitude of the actual error when all elements are of the same sign. We also show that our bounds are informative for most cases of IEEE half-precision and all cases of single-precision numbers for a vector of dimension $n \\leq 10^7$.\\\n Our numerical experiments confirm that the probabilistic bounds are tighter by 2 to 3 orders of magnitude than their deterministic counterparts for dimensions of at least 80 with extremely small failure probabilities. The experiments also confirm that our bounds are much more accurate for vectors consisting of elements of the same sign.\nauthor:\n- 'Johnathan Rhyne[^1]'\nbibliography:" +"---\nabstract: 'We consider a one-dimensional model allowing analytical derivation of the effective interactions between two charged colloids. We evaluate exactly the partition function for an electroneutral salt-free suspension with dielectric jumps at the colloids\u2019 position.
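A small numerical companion to the summation-bounds abstract above: compare a worst-case deterministic forward-error bound for recursive summation with a probabilistic bound that grows like $\sqrt{n}$, against the actual error measured with a correctly rounded reference. The bound constants follow the standard rounding-error model and are illustrative, not the paper's own bounds.

```python
import math
import numpy as np

u = 2.0 ** -53                      # unit roundoff, IEEE double precision
n = 10 ** 6
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, n)        # same-sign elements (favourable case)

computed = 0.0
for xi in x:                        # plain recursive summation
    computed += xi
exact = math.fsum(x)                # correctly rounded reference sum

abs_sum = np.abs(x).sum()
det_bound = (n - 1) * u * abs_sum               # worst case, grows like n
prob_bound = 3.0 * math.sqrt(n) * u * abs_sum   # ~sqrt(n), holds w.h.p.

print(f"actual error        {abs(computed - exact):.3e}")
print(f"deterministic bound {det_bound:.3e}")
print(f"probabilistic bound {prob_bound:.3e}")  # typically 2-3 orders tighter
```

The $n$-versus-$\sqrt{n}$ gap between the two bounds is exactly the 2-to-3-orders-of-magnitude improvement the abstract reports for large dimensions.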
We derive a contact relation with the pressure that shows there is like-charge attraction, whether or not the counterions are confined between the colloids. In contrast to the homogeneous dielectric case, there is the possibility for the colloids to attract despite the number of counter-ions ($N$) being even. The results are shown to recover the mean-field prediction in the limit $N\\to \\infty$.'\nauthor:\n- Lucas Varela\n- Gabriel T\u00e9llez\n- Emmanuel Trizac\nbibliography:\n- 'refs.bib'\ntitle: 'One-dimensional colloidal model with dielectric inhomogeneity'\n---\n\nIntroduction\n============\n\nElectrostatic interactions are key to a wealth of phenomena in soft condensed matter: like-charge attraction, overcharging/charge inversion, self-assembly, electrophoresis, etc [@Holm2001; @andelman2006; @Levin2002; @Naji2005; @Boroudjerdi2005; @Ioannidou2016]. Nonetheless, understanding many-body correlated interactions from a fundamental point of view is usually shielded by mathematical complexities that can only be bypassed with physical insight. Take for example one of the simplest possible settings: two similar charged plates interacting in the presence of neutralizing counter-ions. For high counter-ion valency and/or large" +"---\nabstract: 'In the application of neural networks, we need to select a suitable model based on the problem complexity and the dataset scale. To analyze the network\u2019s capacity, quantifying the information learned by the network is necessary. This paper proves that the distance between the neural network weights in different training stages can be used to directly estimate the information accumulated by the network during training. The experimental results verify the utility of this method. An application of this method related to label corruption is shown at the end.'\nauthor:\n- 'Liqun Yang ^1^ Yijun Yang ^2^ Yao Wang ^2^ Zhenyu Yang ^2^ Wei Zeng ^2^\\'\ntitle: 'THE DISTANCE BETWEEN THE WEIGHTS OF THE NEURAL NETWORK IS MEANINGFUL\\'\n---\n\nIntroduction\n============\n\nSince Hebb opened the door to machine learning in [@hebb1949organization], the field has produced enormous value. At the beginning of this century, neural networks\u2019 potential in machine learning tasks was discovered in many fields. With more and more people noticing this delicate model\u2019s power, applications based on neural networks develop rapidly and change the world gradually [@rumelhart1986learning; @hochreiter1997long; @fukushima1980neocognitron; @lecun2015deep; @hinton1994autoencoders; @sajjadi2017enhancenet; @he2017mask; @an2015variational; @arjovsky2017wasserstein], which makes people eager to reveal" +"---\nabstract: 'Alkali metal dosing (AMD) has been widely used as a way to control doping without chemical substitution. This technique, in combination with angle resolved photoemission spectroscopy (ARPES), often provides an opportunity to observe unexpected phenomena. However, the amount of transferred charge and the corresponding change in the electronic structure vary significantly depending on the material. Here, we report a study on the correlation between the sample work function and alkali-metal-induced electronic structure change for three iron-based superconductors: FeSe, Ba(Fe$_{0.94}$Co$_{0.06}$)$_{2}$As$_{2}$ and NaFeAs, which share a similar Fermi surface topology. The electronic structure change upon monolayer alkali metal dosing and the sample work function were measured by ARPES.
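The quantity at the heart of the weight-distance paper above is trivial to monitor in any training loop. A minimal, framework-agnostic sketch (the tiny logistic model and synthetic data are hypothetical) that records $\lVert w_t - w_0\rVert$ as training proceeds:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = (X @ rng.standard_normal(10) > 0).astype(float)  # synthetic labels

w = np.zeros(10)
w0 = w.copy()
distances = []
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # logistic model predictions
    w -= 0.1 * X.T @ (p - y) / len(y)       # gradient step on cross-entropy
    distances.append(np.linalg.norm(w - w0))  # distance travelled from init
```

On clean data, `distances` grows quickly early on and flattens as learning saturates, which mirrors the information-accumulation reading proposed in the paper; under label corruption the curve behaves differently, which is the application mentioned at the end of the abstract.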
Our results show that the degree of electronic structure change is proportional to the difference between the work function of the sample and Mulliken\u2019s absolute electronegativity of the dosed alkali metal. This finding provides a possible way to estimate the AMD induced electronic structure change.'\nauthor:\n- Saegyeol Jung\n- Yukiaki Ishida\n- Minsoo Kim\n- Masamichi Nakajima\n- Shigeyuki Ishida\n- Hiroshi Eisaki\n- Woojae Choi\n- Yong Seung Kwon\n- Jonathan Denlinger\n- Toshio Otsu\n- Yohei Kobayashi\n- Soonsang Huh\n- Changyoung Kim\ntitle: Effect of the sample" +"---\nabstract: 'We have investigated the Kondo physics of a single magnetic impurity embedded in multi-Dirac (Weyl) node fermionic systems. By using a generic effective model for the host material and employing a numerical renormalization group approach we access the low temperature behavior of the system, identifying the existence of Kondo screening in single-, double-, and triple-Dirac (Weyl) node models. We find that in any multi-Dirac node systems the low-energy regime lies within one of the known classes of pseudogap Kondo problem, extensively studied in the literature. Kondo screening is also observed for time reversal symmetry broken Weyl systems. This is, however, possible only in the particle-hole symmetry broken regime obtained for finite chemical potential $\\mu$. Although weakly, breaking time-reversal symmetry suppresses the Kondo resonance, especially in the single-node Weyl semimetals. More interesting Kondo screening regimes are obtained for inversion symmetry broken multi-Weyl fermions. In these systems the Kondo regimes of double- and triple-Weyl node models are much richer than in the single-Weyl node model. While in the single-Weyl node model the Kondo temperature increases monotonically with $|\\mu|$ regardless the value of the inversion symmetry breaking parameter $Q_0$, in double- and triple-Weyl node models there are two distinct regimes: (i)" +"---\nabstract: 'We study the phenomenon of gravitational particle production as applied to a scalar spectator field in the context of $\\alpha$-attractor inflation. Assuming that the scalar has a minimal coupling to gravity, we calculate the abundance of gravitationally-produced particles as a function of the spectator\u2019s mass $m_\\chi$ and the inflaton\u2019s $\\alpha$ parameter. If the spectator is stable and sufficiently weakly coupled, such that it does not thermalize after reheating, then a population of spin-0 particles is predicted to survive in the universe today, providing a candidate for dark matter. Inhomogeneities in the spatial distribution of dark matter correspond to an isocurvature component, which can be probed by measurements of the cosmic microwave background anisotropies. We calculate the dark matter-photon isocurvature power spectrum and by comparing with upper limits from *Planck*, we infer constraints on $m_\\chi$ and $\\alpha$. If the scalar spectator makes up all of the dark matter today, then for $\\alpha = 10$ and $T_{\\text{\\sc rh}}= 10^4 {\\ \\mathrm{GeV}}$ we obtain $m_\\chi > 1.8 \\times 10^{13} {\\ \\mathrm{GeV}}\\approx 1.2 \\, m_\\phi$, where $m_\\phi$ is the inflaton\u2019s mass.'\nauthor:\n- 'Siyang Ling and Andrew J. 
Long'\nbibliography:\n- 'alpha\\_attractor.bib'\ndate: ' *Department of Physics and Astronomy, Rice University, Houston," +"---\nauthor:\n- 'C.\u00a0L\u00f3pez-Sanjuan'\n- 'H.\u00a0Yuan'\n- 'H.\u00a0V\u00e1zquez Rami\u00f3'\n- 'J.\u00a0Varela'\n- 'D.\u00a0Crist\u00f3bal-Hornillos'\n- 'P.\u00a0-E.\u00a0Tremblay'\n- 'A.\u00a0Mar\u00edn-Franch'\n- 'A.\u00a0J.\u00a0Cenarro'\n- 'A.\u00a0Ederoclite'\n- 'E.\u00a0J.\u00a0Alfaro'\n- 'A.\u00a0Alvarez-Candal'\n- 'S.\u00a0Daflon'\n- 'A.\u00a0Hern\u00e1n-Caballero'\n- 'C.\u00a0Hern\u00e1ndez-Monteagudo'\n- 'F.\u00a0M.\u00a0Jim\u00e9nez-Esteban'\n- 'V.\u00a0M.\u00a0Placco'\n- 'E.\u00a0Tempel'\n- 'J.\u00a0Alcaniz'\n- 'R.\u00a0E.\u00a0Angulo'\n- 'R.\u00a0A.\u00a0Dupke'\n- 'M.\u00a0Moles'\n- 'L.\u00a0Sodr\u00e9 Jr.'\nbibliography:\n- 'biblio.bib'\ndate: 'Received 29 January 2021 / Accepted 13 July 2021'\ntitle: 'J-PLUS: Systematic impact of metallicity on photometric calibration with the stellar locus'\n---\n\nIntroduction {#sec:intro}\n============\n\nOne fundamental step in the data processing of any imaging survey is the photometric calibration of the observations. The calibration process aims to translate the observed counts in astronomical images to a physical flux scale referred to the top of the atmosphere. Because accurate colors are needed to derive photometric redshifts for galaxies, atmospheric parameters for Milky Way (MW) stars, and surface characteristics for minor bodies; and reliable absolute fluxes are involved in the estimation of the luminosity and the stellar mass of galaxies, current and future photometric surveys target a calibration uncertainty" +"---\nabstract: 'A complex unit gain graph ($ \\mathbb{T} $-gain graph), $ \\Phi=(G, \\varphi) $ is a graph where the function $ \\varphi $ assigns a unit complex number to each orientation of an edge of $ G $, and its inverse is assigned to the opposite orientation. In this article, we propose gain distance matrices for $ \\mathbb{T} $-gain graphs. These notions generalize the corresponding known concepts of distance matrices and signed distance matrices. Shahul K. Hameed et al. introduced signed distance matrices and developed their properties. Motivated by their work, we establish several spectral properties, including some equivalences between balanced $ \\mathbb{T} $-gain graphs and gain distance matrices. Furthermore, we introduce the notion of positively weighted $ \\mathbb{T} $-gain graphs and study some of their properties. Using these properties, Acharya\u2019s and Stani\u0107\u2019s spectral criteria for balance are deduced. Moreover, the notions of order independence and distance compatibility are studied. Besides, we obtain some characterizations for distance compatibility.'\nauthor:\n- 'Aniruddha Samanta [^1]\u00a0'\n- 'M. Rajesh Kannan[^2]'\nbibliography:\n- 'raj-ani-ref1.bib'\ntitle: Gain distance matrices for complex unit gain graphs\n---\n\n=0.25in\n\n[**Mathematics Subject Classification(2010):**]{} 05C22(primary); 05C50, 05C35(secondary).\n\n**Keywords.** Complex unit gain graph, Signed distance matrix, Distance matrix, Adjacency" +"---\nabstract: 'We present the spatial analysis of five Compton thick (CT) active galactic nuclei (AGNs), including MKN 573, NGC 1386, NGC 3393, NGC 5643, and NGC 7212, for which high resolution *Chandra* observations are available. For each source, we find hard X-ray emission ($>$3 keV) extending to $\\sim$kpc scales along the ionization cone, and for some sources, in the cross-cone region. 
This collection represents the first, high-signal sample of CT AGN with extended hard X-ray emission for which we can begin to build a more complete picture of this new population of AGN. We investigate the energy dependence of the extended X-ray emission, including possible dependencies on host galaxy and AGN properties, and find a correlation between the excess emission and obscuration, suggesting a connection between the nuclear obscuring material and the galactic molecular clouds. Furthermore, we find that the soft X-ray emission extends farther than the hard X-rays along the ionization cone, which may be explained by a galactocentric radial dependence on the density of molecular clouds due to the orientation of the ionization cone with respect to the galactic disk. These results are consistent with other CT AGN with observed extended hard X-ray emission (e.g., ESO 428-G014" +"---\nabstract: 'Quasi-periodic fast propagating (QFP) waves are often excited by solar flares, and could be trapped in coronal structures with low [Alfv\u00e9n]{} speed, so they could be used as a diagnostic tool for both the flaring core and the magnetic waveguide. As the periodicity of a QFP wave could originate from a periodic source or be dispersively waveguided, it is a key parameter for diagnosing the flaring core and waveguide. In this paper, we study two QFP waves excited by a [*GOES*]{}-class C1.3 solar flare occurring in active region NOAA 12734 on 8 March 2019. The two QFP waves were guided by two oppositely oriented coronal funnels. The periods of the two QFP waves were identical and were roughly equal to the period of the oscillatory signal in the X-ray and 17 GHz radio emission released by the flaring core. It is very likely that the two QFP waves were periodically excited by the flaring core. Many features of this QFP wave event are consistent with the magnetic tuning fork model. We also investigated the seismological application of QFP waves, and found that the magnetic field inferred with magnetohydrodynamic seismology was consistent with that obtained in the magnetic extrapolation model. Our study" +"---\nabstract: 'Understanding common envelope (CE) evolution is an outstanding problem in binary evolution. Although the CE phase is not driven by gravitational-wave (GW) emission, the in-spiraling binary emits GWs that passively trace the CE dynamics. Detecting this GW signal would provide direct insight into the gas-driven physics. Even a non-detection might offer invaluable constraints. We investigate the prospects of detection of a Galactic CE by LISA. While the dynamical phase of the CE is likely sufficiently loud for detection, it is short and thus rare. We focus instead on the self-regulated phase that proceeds on a thermal timescale. Based on population synthesis calculations and the (unknown) signal duration in the LISA band, we expect $\\sim 0.1-100$ sources in the Galaxy during the mission duration. We map the GW observable parameter space of frequency $f_\\mathrm{GW}$ and its derivative $\\dot f_\\mathrm{GW}$ remaining agnostic on the specifics of the inspiral, and find that signals with $\\mathrm{SNR}>10$ are possible if the CE stalls at separations such that $f_\\mathrm{GW}\\gtrsim2\\times10^{-3}\\,\\mathrm{Hz}$. We investigate the possibility of mis-identifying the signal with other known sources. If the second derivative $\\ddot f_\\mathrm{GW}$ can also be measured, the signal can be distinguished from other sources using a GW braking-index.
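The GW braking index mentioned at the end of the common-envelope abstract has a simple closed form: for a frequency evolution $\dot f \propto f^{\,n}$, the index $n = f\ddot f/\dot f^2$ is recovered from the three measurables, and a point-mass binary driven purely by GW emission gives $n = 11/3$. A sketch with hypothetical test values:

```python
def braking_index(f, fdot, fddot):
    """n = f * fddot / fdot**2 for a frequency evolution fdot ∝ f**n."""
    return f * fddot / fdot ** 2

# A purely GW-driven inspiral satisfies fdot = K * f**(11/3), hence n = 11/3.
K = 1e-20                                            # arbitrary constant
f = 2e-3                                             # Hz, within the LISA band
fdot = K * f ** (11.0 / 3.0)
fddot = (11.0 / 3.0) * K * f ** (8.0 / 3.0) * fdot   # chain rule
print(braking_index(f, fdot, fddot))                 # 3.666..., i.e. 11/3
# A gas-driven common-envelope inspiral would generically depart from 11/3,
# which is what makes the braking index a useful discriminator.
```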
Alternatively," +"---\nabstract: |\n We introduce quantum-K ($QK$), a measure of the descriptive complexity of density matrices using classical prefix-free Turing machines, and show that the initial segments of weak Solovay random and quantum Schnorr random states are incompressible in the sense of $QK$. Many properties enjoyed by prefix-free Kolmogorov complexity ($K$) have analogous versions for $QK$; notably a counting condition.\n\n Several connections between Solovay randomness and $K$, including the Chaitin type characterization of Solovay randomness, carry over to those between weak Solovay randomness and $QK$. We work towards a Levin-Schnorr type characterization of weak Solovay randomness in terms of $QK$.\n\n Schnorr randomness has a Levin-Schnorr characterization using $K_C$, a version of $K$ defined using a computable measure machine $C$. We similarly define $QK_C$, a version of $QK$. Quantum Schnorr randomness is shown to have a Levin-Schnorr and a Chaitin type characterization using $QK_C$. The latter implies a Chaitin type characterization of classical Schnorr randomness using $K_C$.\naddress: 'Department of Mathematics, University of Wisconsin\u2013Madison, 480 Lincoln Dr., Madison, WI 53706, USA'\nauthor:\n- Tejas Bhojraj\nbibliography:\n- 'references.bib'\ntitle: ' Prefix-free quantum Kolmogorov complexity'\n---\n\nIntroduction and Overview\n=========================\n\nThe theory of computation has been extended to the quantum setting; a notable" +"---\nauthor:\n- 'Nathan T. James, Frank E. Harrell Jr., Bryan E. Shepherd'\ndate: '2022-01-07'\ntitle: Bayesian Cumulative Probability Models for Continuous and Mixed Outcomes\n---\n\nOrdinal cumulative probability models (CPMs) \u2013 also known as cumulative link models \u2013 such as the proportional odds regression model are typically used for discrete ordered outcomes, but can accommodate both continuous and mixed discrete/continuous outcomes since these are also ordered. Recent papers describe ordinal CPMs in this setting using non-parametric maximum likelihood estimation. We formulate a Bayesian CPM for continuous or mixed outcome data. Bayesian CPMs inherit many of the benefits of frequentist CPMs and have advantages with regard to interpretation, flexibility, and exact inference (within simulation error) for parameters and functions of parameters. We explore characteristics of the Bayesian CPM through simulations and a case study using HIV biomarker data. In addition, we provide the package `bayesCPM` which implements Bayesian CPM models using the `R` interface to the Stan probabilistic programming language. The Bayesian CPM for continuous outcomes can be implemented with only minor modifications to the prior specification and, despite some limitations, has generally good statistical performance with moderate or large sample sizes.\n\nCumulative probability models for ordinal outcomes \u2013" +"---\nabstract: |\n This article describes techniques employed in the production of a synthetic dataset of driver telematics emulated from a similar real insurance dataset. The synthetic dataset generated has 100,000 policies that include observations of drivers\u2019 claims experience together with associated classical risk variables and telematics-related variables. This work aims to produce a resource that can be used to advance models to assess risks for usage-based insurance. It follows a three-stage process using machine learning algorithms.
The first stage simulates values for the number of claims as a sequence of binary classifications by applying feedforward neural networks. The second stage simulates values for the aggregated amount of claims as a regression using feedforward neural networks, with the number of claims included in the set of feature variables. In the final stage, a synthetic portfolio of the space of feature variables is generated by applying an extended `SMOTE` algorithm. The resulting dataset is evaluated by comparing the synthetic and real datasets when Poisson and gamma regression models are fitted to the respective data. Other visualizations and data summaries produce remarkably similar statistics between the two datasets. We hope that researchers interested in obtaining telematics datasets to calibrate models or learning algorithms will find our" +"---\nauthor:\n- 'C.\u00a0Welling'\n- 'P.\u00a0Frank'\n- 'T.\u00a0En\u00dflin'\n- 'A.\u00a0Nelles'\nbibliography:\n- 'references.bib'\ntitle: 'Reconstructing non-repeating radio pulses with Information Field Theory'\n---\n\nParticle showers in dielectric media produce radio signals which are used for the detection of both ultra-high energy cosmic rays and neutrinos with energies above a few PeV. The amplitude, polarization, and spectrum of these short, broadband radio pulses allow us to draw conclusions about the primary particles that caused them, as well as the mechanics of shower development and radio emission. However, confidently reconstructing the radio signals can pose a challenge, as they are often obscured by background noise. Information Field Theory offers a robust approach to this challenge by using Bayesian inference to calculate the most likely radio signal, given the recorded data. In this paper, we describe the application of Information Field Theory to radio signals from particle showers in both air and ice and demonstrate how accurately pulse parameters can be obtained from noisy data.\n\nIntroduction\n============\n\nThe origin of the most energetic cosmic rays is" +"---\nabstract: 'Jones calculus provides a robust and straightforward method to characterize polarized light and polarizing optical systems using two-element vectors (Jones vectors) and $2 \\times 2$ matrices (Jones matrices). Jones matrices are used to determine the retardance and diattenuation introduced by an optical element or a sequence of elements. Moreover, they are the tool of choice to study optical geometric phases. However, the current sampling method for characterizing the Jones matrix of an optical element is inefficient, since the search space of the problem is in the realm of the real numbers and so applying a general sampling method is time-consuming. In this study, we present an initial approach for solving the problem of finding the eigenvectors that characterize the Jones matrix of a homogeneous optical element through Evolutionary Algorithms (EAs). We evaluate the analytical performance of an EA with a Polynomial Mutation operator and a Genetic Algorithm (GA) with a Simulated Binary crossover operator and a Polynomial Mutation operator, and compare the results with those obtained through a general sampling method.
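Stage three of the telematics pipeline described earlier in this record, populating the feature space with an extended `SMOTE`, builds on the classic SMOTE interpolation step, which the sketch below implements on toy continuous features. The paper's extension goes beyond this; the data and parameters here are hypothetical.

```python
import numpy as np

def smote(X, n_synthetic, k=5, rng=None):
    """Classic SMOTE: for random seed points, interpolate toward one of
    their k nearest neighbours to create synthetic records."""
    rng = np.random.default_rng(rng)
    # Pairwise squared distances and k nearest neighbours (excluding self).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :k]
    seeds = rng.integers(0, len(X), n_synthetic)
    picks = nn[seeds, rng.integers(0, k, n_synthetic)]
    gaps = rng.random((n_synthetic, 1))      # interpolation fractions in [0, 1)
    return X[seeds] + gaps * (X[picks] - X[seeds])

# Toy "portfolio" of two continuous telematics-style features.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 2))
X_syn = smote(X, n_synthetic=1000, rng=rng)  # synthetic feature records
```

Because every synthetic record lies on a segment between two real neighbours, the generated portfolio preserves the marginal and joint feature structure, which is what the Poisson/gamma regression comparison in the abstract is checking.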
The results show that both the EA and the GA outperformed a general sampling method of 6,000 measurements, by requiring on average 103 and 188 fitness functions" +"---\nabstract: 'The sponsored search auction is a crucial component of modern search engines. It requires a set of candidate bidwords that advertisers can place bids on. Existing methods generate bidwords from search queries or advertisement content. However, they suffer from data noise in <query, bidword> and <advertisement, bidword> pairs. In this paper, we propose a triangular bidword generation model (TRIDENT), which takes the high-quality data of paired <query, advertisement> as a supervision signal to indirectly guide the bidword generation process. Our proposed model is simple yet effective: by using the bidword as the bridge between search query and advertisement, the generation of search query, advertisement and bidword can be jointly learned in the triangular training framework. This alleviates the problem that the training data of bidwords may be noisy. Experimental results, including automatic and human evaluations, show that our proposed [TRIDENT]{}\u00a0can generate relevant and diverse bidwords for both search queries and advertisements. Our evaluation on online real data validates the effectiveness of [TRIDENT]{}\u2019s generated bidwords for product search.'\nauthor:\n- 'Zhenqiao Song, Jiaze Chen, Hao Zhou, Lei Li'\nbibliography:\n- 'main.bib'\ntitle: |\n Triangular Bidword Generation for Sponsored\\\n Search Auction\n---" +"---\nabstract: 'Clustering algorithms partition a dataset into groups of similar points. The clustering problem is very general, and different partitions of the same dataset could be considered correct and useful. To fully understand such data, it must be considered at a variety of scales, ranging from coarse to fine. We introduce the Multiscale Environment for Learning by Diffusion (MELD) data model, which is a family of clusterings parameterized by nonlinear diffusion on the dataset. We show that the MELD data model precisely captures latent multiscale structure in data and facilitates its analysis. To efficiently learn the multiscale structure observed in many real datasets, we introduce the Multiscale Learning by Unsupervised Nonlinear Diffusion (M-LUND) clustering algorithm, which is derived from a diffusion process at a range of temporal scales. We provide theoretical guarantees for the algorithm\u2019s performance and establish its computational efficiency. Finally, we show that the M-LUND clustering algorithm detects the latent structure in a range of synthetic and real datasets.'\nauthor:\n- 'James M. Murphy'\n- 'Sam L. Polk[^1]'\nbibliography:\n- 'biblio.bib'\ntitle: A Multiscale Environment for Learning by Diffusion\n---\n\n**Keywords:** clustering; diffusion geometry; hierarchical clustering; machine learning; spectral graph theory;\n\nIntroduction {#sec: Intro 1}\n============\n\nUnsupervised" +"---\nauthor:\n- 'Grzegorz Szamel[^1]'\n- Elijah Flenner\ntitle: 'Long-ranged velocity correlations in dense systems of self-propelled particles'\n---\n\nIntroduction \n=============\n\nA quickly growing field is the study of active matter systems [@Ramaswamy2010; @Marchetti2013; @Vicsek2012; @Bechinger2016; @Elgeti2015; @Needleman2017; @Gompper2020].
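The "family of clusterings parameterized by nonlinear diffusion" in the MELD abstract above has a compact numerical core: form a Markov transition matrix from pairwise affinities and read off structure at several diffusion times. The sketch below omits the density-based mode detection of LUND and uses illustrative parameters throughout.

```python
import numpy as np

def diffusion_operator(X, eps=0.5):
    """Row-stochastic Markov matrix P from a Gaussian affinity kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / eps)
    return W / W.sum(axis=1, keepdims=True)

def diffusion_embedding(P, t, dim=2):
    """Coordinates from leading nontrivial eigenvectors of P, scaled by
    eigenvalue**t: large t emphasizes coarse structure, small t fine."""
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:dim + 1]            # skip the trivial constant eigenvector
    return vecs[:, idx].real * (vals[idx].real ** t)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.15, (40, 2)) for c in ((0, 0), (2, 0), (4, 0))])
P = diffusion_operator(X)
coarse = diffusion_embedding(P, t=2 ** 8)   # long diffusion time: coarse view
fine = diffusion_embedding(P, t=2 ** 2)     # short diffusion time: fine view
```

Sweeping the diffusion time $t$ is precisely what produces the coarse-to-fine family of partitions that the multiscale model formalizes.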
Individual components of these systems perform persistent motion due to the injection (consumption) of energy from their environment. Examples include cell assemblies [@Petitjean2010; @Angelini2011; @Basan2013; @Garcia2015; @Blanch2018; @Henkes2020], bird flocks [@Vicsek2012], bacterial suspensions [@Dombrowski2004; @Peruani2012; @Wensink2012; @Dunkel2013; @Wioland2016; @Urzay2017; @James2018], and self-propelled colloids [@Howse2007; @Tierno2008; @Gosh2009; @Palacci2010; @Jiang2010; @Michelin2013; @Dai2016; @Moran2017]. Active matter systems exhibit many properties absent in equilibrium thermal systems, *e.g.* they may undergo a phase separation of liquid-gas type in the absence of any attractive interactions [@Cates2015].\n\nOne interesting property, first demonstrated experimentally by Garcia *et al.* [@Garcia2015], is the presence of equal-time velocity correlations. These correlations are absent in classical equilibrium systems. It has been recognized for some time [@Szamel2015; @Marconi2016; @Flenner2016] that such non-trivial equal-time velocity correlations are present in simple microscopic models of active matter systems, *i.e.* in systems of self-propelled particles. These correlations are an *emergent property* of these systems, *i.e.* they appear spontaneously, without any explicit velocity-aligning interactions. Recently, two groups [@Henkes2020; @Caprini2020a]" +"---\nabstract: 'Galaxy clusters exhibit a rich morphology during the early and intermediate stages of mass assembly, especially beyond their boundary. A classification scheme based on shapefinders deduced from the Minkowski functionals is examined to fully account for the morphological diversity of galaxy clusters, including relaxed and merging clusters, clusters fed by filamentary structures, and cluster-pair bridges. These configurations are conveniently treated with idealised geometric models and analytical formulae, some of which are novel. Examples from CLASH and LC$^2$ clusters and observed cluster-pair bridges are discussed.'\nauthor:\n- |\n C. Schimd,$^1$[^1] M. Sereno$^{2,3}$\\\n $^{1}$ Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France\\\n $^{2}$ INAF \u2013 Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, via Piero Gobetti 93/3, I-40129 Bologna, Italy\\\n $^{3}$ INFN, Sezione di Bologna, viale Berti Pichat 6/2, 40127 Bologna, Italy\nbibliography:\n- 'MergingClusters\\_MinkowskiFunctionals.bib'\ndate: 'Accepted 2021 January 25. Received 2021 January 19; in original form 2020 July 23'\ntitle: |\n Morphology of relaxed and merging galaxy clusters.\\\n Analytical models for monolithic Minkowski functionals\n---\n\n\\[firstpage\\]\n\ngalaxies: clusters: general \u2013 cosmology: observations\n\nIntroduction {#sec:intro}\n============\n\nMorphology of galaxy clusters is an indicator of their state of relaxation and can be used to infer their formation history and evolution." +"---\nabstract: 'A two-type two-sex branching process is introduced with the aim of describing the interaction of predator and prey populations with sexual reproduction and promiscuous mating. In each generation and in each species, the total number of individuals which mate and produce offspring is controlled by a binomial distribution with size given by the current number of individuals of that species and probability of success depending on the density of prey per predator.
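The equal-time velocity correlations discussed in the opening of this record are measured from particle configurations in a standard way: average the velocity-velocity dot product over pairs, binned by separation. A minimal estimator sketch on toy data (no periodic boundaries; all values hypothetical):

```python
import numpy as np

def velocity_correlation(pos, vel, bins):
    """Equal-time correlation C(r) = <v_i . v_j> averaged over pairs
    whose separation falls in each radial bin."""
    i, j = np.triu_indices(len(pos), k=1)        # all distinct pairs
    r = np.linalg.norm(pos[i] - pos[j], axis=1)
    vv = (vel[i] * vel[j]).sum(axis=1)
    which = np.digitize(r, bins)
    return np.array([vv[which == b].mean() if (which == b).any() else np.nan
                     for b in range(1, len(bins))])

rng = np.random.default_rng(3)
pos = rng.uniform(0, 10, (400, 2))
vel = rng.standard_normal((400, 2))          # uncorrelated velocities
bins = np.linspace(0, 5, 11)
print(velocity_correlation(pos, vel, bins))  # flat around zero, as expected
```

For an equilibrium thermal system this estimator is flat at zero beyond self-correlation; a slowly decaying positive tail is the emergent, long-ranged correlation signature the paper studies in self-propelled systems.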
The resulting model enables us to depict the typical cyclic behaviour of predator-prey systems under some mild assumptions on the shape of the function that characterises the probability of survival of the previous binomial distribution. We present some basic results about fixation and extinction of both species as well as conditions for the coexistence of both of them. We also analyse the suitability of the process to model real ecosystems comparing our model with a real dataset.'\nauthor:\n- 'Cristina Guti\u00e9rrez[^1]\u00a0[^2]'\n- 'Carmen Minuesa\u00a0 [^3]'\ntitle: 'A two-sex branching process with oscillations: application to predator-prey systems'\n---\n\n[**Keywords:** ]{}[predator-prey model; two-sex branching process; oscillations; promiscuous mating; extinction; coexistence; density dependence.]{}\n\n[**MSC:** ]{}[60J80, 60J85.]{}\n\nIntroduction {#sec:Introduction}\n============\n\nRecently, the first stochastic process to model the interplay of predator and prey" +"---\nabstract: 'We propose social welfare optimization as a general paradigm for formalizing fairness in AI systems. We argue that optimization models allow formulation of a wide range of fairness criteria as social welfare functions, while enabling AI to take advantage of highly advanced solution technology. Rather than attempting to reduce bias between selected groups, one can achieve equity across all groups by incorporating fairness into the social welfare function. This also allows a fuller accounting of the welfare of the individuals involved. We show how to integrate social welfare optimization with both rule-based AI and machine learning, using either an in-processing or a post-processing approach. We present empirical results from a case study as a preliminary examination of the validity and potential of these integration strategies.'\nauthor:\n- 'Violet (Xinying) Chen [^1], J. N. Hooker [^2]'\nbibliography:\n- 'ref.bib'\ndate: July 2022\ntitle: Fairness through Social Welfare Optimization\n---\n\nIntroduction\n============\n\nArtificial intelligence is increasingly used not only to solve problems, but to recommend action decisions that range from awarding mortgage loans to granting parole. The prospect of making decisions immediately raises the question of ethics and fairness. If ethical norms are to be incorporated into artificial decision making," +"---\nabstract: 'This paper considers diffeomorphism invariant theories of gravity coupled to matter, with second order equations of motion. This includes Einstein-Maxwell and Einstein-scalar field theory with (after field redefinitions) the most general parity-symmetric four-derivative effective field theory corrections. A gauge-invariant approach is used to study the characteristics associated to the physical degrees of freedom in an arbitrary background solution. The symmetries of the principal symbol arising from diffeomorphism invariance and the action principle are determined. For gravity coupled to a single scalar field (i.e. a Horndeski theory) it is shown that causality is governed by a characteristic polynomial of degree $6$ which factorises into a product of quadratic and quartic polynomials. The former is defined in terms of an \u201ceffective metric\" and is associated with a \u201cpurely gravitational\" polarisation, whereas the latter generically involves a mixture of gravitational and scalar field polarisations. 
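The factorisation of the characteristic polynomial just described can be written schematically as follows; the notation below is illustrative rather than taken from the paper.

```latex
% Schematic factorisation of the degree-6 characteristic polynomial of a
% Horndeski theory: a quadratic factor governed by an "effective metric"
% G^{\mu\nu} and a quartic factor F mixing gravitational and scalar
% polarisations; characteristic covectors \xi solve P(x,\xi) = 0.
\[
  P(x,\xi) = Q(x,\xi)\,F(x,\xi), \qquad
  Q(x,\xi) = G^{\mu\nu}(x)\,\xi_\mu \xi_\nu, \qquad
  F(x,\xi) = F^{\mu\nu\rho\sigma}(x)\,\xi_\mu \xi_\nu \xi_\rho \xi_\sigma .
\]
```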
The \u201cfastest\" degrees of freedom are associated with the quartic polynomial, which defines a surface analogous to the Fresnel surface in crystal optics. In contrast with optics, this surface is generically non-singular except on certain surfaces in spacetime. It is shown that a Killing horizon is an example of such a surface. It is also shown that" +"---\nabstract: 'Surrender poses one of the major risks to life insurance and a sound modeling of its true probability has direct implication on the risk capital demanded by the Solvency II directive. We add to the existing literature by performing extensive experiments that present highly practical results for various modeling approaches, including XGBoost, random forest, GLM and neural networks. Further, we detect shortcomings of prevalent model assessments, which are in essence based on a confusion matrix. Our results indicate that accurate label predictions and a sound modeling of the true probability can be opposing objectives. We illustrate this with the example of resampling. While resampling is capable of improving label prediction in rare event settings, such as surrender, and thus is commonly applied, we show theoretically and numerically that models trained on resampled data predict significantly biased event probabilities. Following a probabilistic perspective on surrender, we further propose time-dependent confidence bands on predicted mean surrender rates as a complementary assessment and demonstrate its benefit. This evaluation takes a very practical, going concern perspective, which respects that the composition of a portfolio, as well as the nature of underlying risk drivers might change over time.'\nauthor:\n- |\n Mark Kiermayer\\" +"---\nabstract: 'We give a description of the intrinsic geometry of elastic distortions in three-dimensional nematic liquid crystals and establish necessary and sufficient conditions for a set of functions to represent these distortions by describing how they couple to the curvature tensor. We demonstrate that, in contrast to the situation in two dimensions, the first-order gradients of the director alone are not sufficient for full reconstruction of the director field from its intrinsic geometry: it is necessary to provide additional information about the second-order director gradients. We describe several different methods by which the director field may be reconstructed from its intrinsic geometry. Finally, we discuss the coupling between individual distortions and curvature from the perspective of Lie algebras and groups and describe homogeneous spaces on which pure modes of distortion can be realised.'\nauthor:\n- Joseph Pollard\n- 'Gareth P. Alexander'\ntitle: 'Intrinsic Geometry and Director Reconstruction for Three-Dimensional Liquid Crystals'\n---\n\nThe geometric characterisation of liquid crystals textures has been fundamental to their understanding. A classic example of the insights of geometric methodology is the description of the cholesteric blue phases as the result of the geometric frustration of trying to realise in flat space the perfect double" +"---\nabstract: 'We show that in pool-based active classification without assumptions on the underlying distribution, if the learner is given the power to abstain from some predictions by paying the price marginally smaller than the average loss $1/2$ of a random guess, exponential savings in the number of label requests are possible whenever they are possible in the corresponding realizable problem. 
We extend this result to provide a necessary and sufficient condition for exponential savings in pool-based active classification under the model misspecification.'\nauthor:\n- |\n \\\n HSE University and Institute for Information Transmission Problems RAS, Moscow\\\n \\\n ETH, Z\u00fcrich\nbibliography:\n- 'mybib.bib'\ntitle: Exponential Savings in Agnostic Active Learning through Abstention\n---\n\nactive learning, sample complexity, abstention, reject option, Chow\u2019s risk, VC dimension, model selection aggregation, Massart\u2019s noise\n\nIntroduction {#sec:introduction}\n============\n\nPool-based *active classification* can be seen as an extension of the classical PAC classification setup, where instead of learning from the labeled sample $(X_1, Y_1), \\ldots, \n(X_n, Y_n)$, one can adaptively request the labels from a large pool $X_{1}, X_2, \n\\ldots$ of i.i.d. unlabeled instances round by round. Our hope is to request significantly fewer labels $Y_i$ and get the same statistical guarantees as in *passive learning*. A" +"---\naddress: 'Kagawa University, Faculty of education, Mathematics, Saiwaicho $1$-$1$, Takamatsu, Kagawa, $760$-$8522$, Japan'\nauthor:\n- Naoto Yotsutani\ntitle: 'Diffeomorphism classes of the doubling Calabi-Yau threefolds with Picard number two'\n---\n\n[**Abstract.**]{} Previously we constructed Calabi-Yau threefolds by a differential-geometric gluing method using Fano threefolds with their smooth anticanonical $K3$ divisors [@DY14]. In this paper, we further consider the diffeomorphism classes of the resulting Calabi-Yau threefolds (which are called the *doubling Calabi-Yau threefolds*) starting from different pairs of Fano threefolds with Picard number one. Using the classifications of simply-connected $6$-manifolds in differential topology and the *$\\lambda$-invariant* introduced by Lee [@Lee20], we prove that any two of the doubling Calabi-Yau threefolds with Picard number two are not diffeomorphic to each other when the underlying Fano threefolds are distinct families.\n\nIntroduction {#sec:Intro}\n============\n\nThe purpose of this paper is to consider the diffeomorphism classes of Calabi-Yau $3$-folds with Picard number two constructed in our differential-geometrical gluing method. In [@DY14], Doi and the author gave a differential-geometric construction (the *doubling construction*) of Calabi-Yau $3$-folds starting from Fano $3$-folds with their smooth anticanonical $K3$ divisors (see Theorems $\\ref{thm:Kov}$ and $\\ref{thm:DY}$ for more details). Throughout this paper, we call the resulting Calabi-Yau $3$-folds obtained by" +"---\nabstract: 'Information Causality is a physical principle which states that the amount of randomly accessible data over a classical communication channel cannot exceed its capacity, even if the sender and the receiver have access to a source of nonlocal correlations. This principle can be used to bound the nonlocality of quantum mechanics without resorting to its full formalism, with a notable example of reproducing the Tsirelson\u2019s bound of the Clauser-Horne-Shimony-Holt inequality. Despite being promising, the latter result found little generalization to other Bell inequalities because of the limitations imposed by the process of concatenation, in which several nonsignaling resources are put together to produce tighter bounds. In this work, we show that concatenation can be successfully replaced by limits on the communication channel capacity. 
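For reference, the Information Causality condition mentioned here is usually stated as the following inequality, with Alice holding bits $a_1,\dots,a_n$, an $m$-bit classical message, and $\beta$ denoting Bob's guess when asked for bit $b=k$:

```latex
% Information Causality as usually stated: the total information about
% Alice's data accessible to Bob is bounded by the message size m.
\[
  I \,\equiv\, \sum_{k=1}^{n} I\!\left(a_k : \beta \,\middle|\, b = k\right) \;\le\; m .
\]
```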
It allows us to re-derive and, in some cases, significantly improve all the previously known results in a simpler manner and apply the Information Causality principle to previously unapproachable Bell scenarios.'\nauthor:\n- Nikolai Miklin\n- 'Marcin Paw[\u0142]{}owski'\nbibliography:\n- 'ic.bib'\ntitle: Information Causality without concatenation\n---\n\n[^1]\n\nIntroduction\n============\n\nInformation Causality (IC) is a physical principle proposed to bound nonlocality of correlations without resorting to the full formalism of quantum mechanics\u00a0[@IC; @ICrev]. Instead," +"---\nabstract: 'Given a graph, the shortest-path problem requires finding a sequence of edges with minimum cumulative length that connects a source vertex to a target vertex. We consider a variant of this classical problem in which the position of each vertex in the graph is a continuous decision variable constrained in a convex set, and the length of an edge is a convex function of the position of its endpoints. Problems of this form arise naturally in many areas, from motion planning of autonomous vehicles to optimal control of hybrid systems. The price for such a wide applicability is the complexity of this problem, which is easily seen to be NP-hard. Our main contribution is a strong and lightweight mixed-integer convex formulation based on perspective operators, that makes it possible to efficiently find globally optimal paths in large graphs and in high-dimensional spaces.'\nauthor:\n- 'Tobia Marcucci[^1]'\n- Jack Umenberger\n- 'Pablo A. Parrilo'\n- Russ Tedrake\nbibliography:\n- 'references.bib'\ntitle: ' Shortest Paths in Graphs of Convex Sets[^2] '\n---\n\nShortest-path problem, graph problems with neighborhoods, mixed-integer convex programming, perspective formulation, optimal control.\n\n52B05, 90C11, 90C25, 90C35, 90C57, 93C55, 93C83.\n\nIntroduction {#sec:intro}\n============\n\n![ Example of an SPP" +"---\nabstract: 'Given a graph $G$ and an integer $k$, it is an $NP$-complete problem to decide whether $G$ has a dominating set of size at most $k$. In this paper we study this problem for the Kn[\u00f6]{}del Graph on $n$ vertices using elementary number theory techniques. In particular, we show an explicit upper bound for the domination number of the Kn[\u00f6]{}del Graph on $n$ vertices any time that we can find a prime number $p$ dividing $n$ for which $2$ is a primitive root.'\nauthor:\n- Jesse Racicot\n- 'Giovanni Rosso[^1]'\nbibliography:\n- 'sample-dmtcs.bib'\nnocite: '[@*]'\ntitle: 'Domination in Kn[\u00f6]{}del graphs'\n---\n\nIntroduction\n============\n\nGiven a graph $G = (V, E)$, a subset $D \\subseteq V$ is said to be a *dominating set* if every vertex in $V$ is in $D$ or adjacent to some vertex in $D$. A dominating set of minimum size is called a $\\gamma$-set and the size of any $\\gamma$-set is denoted by $\\gamma(G)$. The problem of finding a minimum dominating set is a computationally difficult optimization problem. In particular, given a graph $G$ and an integer $k$, determining whether $\\gamma(G) \\leq k$ is $NP$-complete [@NPComplete].\n\nThe Kn[\u00f6]{}del graph was implicitly defined in [@KnodelGossip]. Therein," +"---\nabstract: 'Support Vector Machines (SVMs) are one of the most popular supervised learning models to classify using a hyperplane in an Euclidean space. Similar to SVMs, tropical SVMs classify data points using a tropical hyperplane under the tropical metric with the max-plus algebra. 
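A minimal sketch of the two tropical ingredients named in the tropical-SVM abstract above, assuming numpy; the sector-based classification rule shown is a simplification of what a tropical SVM does, not the paper's algorithm.

```python
import numpy as np

def tropical_distance(x, y):
    """Tropical metric on the tropical projective torus:
    d_tr(x, y) = max_i (x_i - y_i) - min_i (x_i - y_i)."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    return float(diff.max() - diff.min())

def tropical_sector(x, omega):
    """Sector of the max-plus tropical hyperplane with parameter omega that
    contains x: the coordinate attaining max_i (omega_i + x_i). A tropical
    classifier assigns labels by sector membership (tie-breaking ignored)."""
    return int(np.argmax(np.asarray(omega, float) + np.asarray(x, float)))
```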
In this paper, first we show generalization error bounds of tropical SVMs over the tropical projective torus. While the generalization error bounds attained via [Vapnik-Chervonenkis (]{}VC[)]{} dimensions in a distribution-free manner still depend on the dimension, we also show [numerically and]{} theoretically by extreme value statistics that the tropical SVMs for classifying data points from two Gaussian distributions as well as empirical data sets of different neuron types are fairly robust against the curse of dimensionality. Extreme value statistics also underlie the anomalous scaling behaviors of the tropical distance between random vectors with additional noise dimensions. Finally, we define tropical SVMs over a function space with the tropical metric.'\nauthor:\n- Ruriko Yoshida\n- Misaki Takamori\n- Hideyuki Matsumoto\n- Keiji Miura\nbibliography:\n- 'document.bib'\ntitle: 'Tropical Support Vector Machines: Evaluations and Extension to Function Spaces'\n---\n\n\\[orcid=0000-0002-9258-6541\\]\n\nWe obtained generalization error bounds of tropical Support Vector Machines (SVMs) via [the]{} Vapnik-Chervonenkis dimensions [ of tropical" +"---\nabstract: 'Exploring controllable interactions lies at the heart of quantum science. Neutral Rydberg atoms provide a versatile route toward flexible interactions between single quanta. Previous efforts mainly focused on the excitation annihilation\u00a0(EA) effect of the Rydberg blockade due to its robustness against interaction fluctuation. We study another effect of the Rydberg blockade, namely, the transition slow-down\u00a0(TSD). In TSD, a ground-Rydberg cycling in one atom slows down a Rydberg-involved state transition of a nearby atom, which is in contrast to EA that annihilates a presumed state transition. TSD can lead to an accurate controlled-[NOT]{}\u00a0([CNOT]{}) gate with a sub-$\\mu$s duration about $2\\pi/\\Omega+\\epsilon$ by two pulses, where $\\epsilon$ is a negligible transient time to implement a phase change in the pulse and $\\Omega$ is the Rydberg Rabi frequency. The speedy and accurate TSD-based [CNOT]{} makes neutral atoms comparable\u00a0(superior) to superconducting\u00a0(ion-trap) systems.'\nauthor:\n- 'Xiao-Feng Shi'\ntitle: 'Transition slow-down by Rydberg interaction of neutral atoms and a fast controlled-[NOT]{} quantum gate'\n---\n\nintroduction\n============\n\nThere are exciting advances in Rydberg atom quantum science recently\u00a0[@PhysRevLett.85.2208; @Lukin2001; @Saffman2010; @Saffman2016; @Weiss2017; @Firstenberg2016; @Adams2020; @Browaeys2020] because of the feasibility to coherently and rapidly switch on and off the strong dipole-dipole interaction." +"---\nabstract: |\n [This paper focuses on the bootstrap for network dependent processes under the conditional $\\psi$-weak dependence. Such processes are distinct from other forms of random fields studied in the statistics and econometrics literature so that the existing bootstrap methods cannot be applied directly. We propose a block-based approach and a modification of the dependent wild bootstrap for constructing confidence sets for the mean of a network dependent process. In addition, we establish the consistency of these methods for the smooth function model and provide the bootstrap alternatives to the network heteroskedasticity-autocorrelation consistent (HAC) variance estimator. 
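The dependent wild bootstrap mentioned in the network-bootstrap abstract above can be sketched as follows for the sample mean: residuals are multiplied by auxiliary variables that are correlated for nodes at small network distance. The Gaussian-kernel covariance used for the multipliers is an illustrative stand-in for the paper's construction, and all names are assumptions.

```python
import numpy as np

def dependent_wild_bootstrap_mean(x, dist, bandwidth, n_boot=999, rng=None):
    """Bootstrap replicates of the sample mean of x (one value per node).
    `dist` is the matrix of network (shortest-path) distances; multipliers
    for nearby nodes are strongly correlated (kernel choice is an assumption)."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x, float)
    n = x.size
    cov = np.exp(-(np.asarray(dist, float) / bandwidth) ** 2)  # multiplier covariance
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    xbar = x.mean()
    reps = np.empty(n_boot)
    for b in range(n_boot):
        w = L @ rng.standard_normal(n)             # network-correlated multipliers
        reps[b] = xbar + np.mean((x - xbar) * w)   # wild-bootstrap replicate
    return reps
```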
We find that the modified dependent wild bootstrap and the corresponding variance estimator are consistent under weaker conditions relative to the block-based method, which makes the former approach preferable for practical implementation. ]{}\n\n [Keywords. Conditional bootstrap; Block bootstrap; Dependent wild bootstrap; Network dependent process; Random field; Conditional $\\psi$-weak dependence.]{}\nbibliography:\n- 'network\\_bootstrap.bib'\n---\n\nThe Bootstrap for Network Dependent Processes\n\nDenis Kojevnikov\\\n*Tilburg University*\n\nIntroduction\n============\n\nThe aim of this paper is developing bootstrap approaches for the sample mean of network dependent processes studied in @Kojevnikov/Marmer/Song:20 [hereafter KMS]. A network dependent process is a random field indexed by the set of" +"---\nabstract: 'We introduce the matrix-based R[\u00e9]{}nyi\u2019s $\\alpha$-order entropy functional to parameterize Tishby *et al.* information bottleneck (IB) principle\u00a0[@tishby99information] with a neural network. We term our methodology Deep Deterministic Information Bottleneck (DIB), as it avoids variational inference and distribution assumption. We show that deep neural networks trained with DIB outperform the variational objective counterpart and those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack. Code available at\u00a0.'\naddress: |\n $^1$Computational NeuroEngineering Laboratory, University of Florida, Gainesville, FL 32611, USA\\\n $^2$NEC Laboratories Europe, 69115 Heidelberg, Germany\nbibliography:\n- 'strings.bib'\n- 'refs.bib'\ntitle: |\n Deep Deterministic Information Bottleneck\\\n with Matrix-based Entropy Functional\n---\n\nInformation bottleneck, representation learning, matrix-based R[\u00e9]{}nyi\u2019s $\\alpha$-order entropy functional\n\nIntroduction {#sec:intro}\n============\n\nThe information bottleneck (IB) principle was introduced by Tishby *et al.*\u00a0[@tishby99information] as an information-theoretic framework for learning. It considers extracting information about a target signal $Y$ through a correlated observable $X$. The extracted information is quantified by a variable $T$, which is (a possibly randomized) function of $X$, thus forming the Markov chain $Y \\leftrightarrow X \\leftrightarrow T$. Suppose we know the joint distribution $p(X,Y)$, the objective is to learn a representation $T$ that" +"---\nabstract: 'Network epidemiology has become a vital tool in understanding the effects of high-degree vertices, geographic and demographic communities, and other inhomogeneities in social structure on the spread of disease. However, many networks derived from modern datasets are quite dense, such as mobility networks where each location has links to a large number of potential destinations. One way to reduce the computational effort of simulating epidemics on these networks is sparsification, where we select a representative subset of edges based on some measure of their importance. Recently an approach was proposed using an algorithm based on the effective resistance of the edges. We explore how effective resistance is correlated with the probability that an edge transmits disease in the SI model. We find that in some cases these two notions of edge importance are well correlated, making effective resistance a computationally efficient proxy for the importance of an edge to epidemic spread. 
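The effective-resistance sampling that the sparsification abstract above builds on can be sketched as follows; this is a dense, small-graph version of the standard spectral-sparsifier recipe (sample edges with probability proportional to weight times effective resistance, then reweight), not necessarily the paper's exact algorithm.

```python
import numpy as np

def effective_resistances(adj):
    """Effective resistance of every edge of a weighted graph, from the
    Moore-Penrose pseudoinverse of the graph Laplacian (fine for small
    graphs; fast approximations exist for large ones)."""
    adj = np.asarray(adj, float)
    lap = np.diag(adj.sum(1)) - adj
    lplus = np.linalg.pinv(lap)
    n = len(adj)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i, j] > 0]
    return {(i, j): lplus[i, i] + lplus[j, j] - 2 * lplus[i, j] for i, j in edges}

def sparsify(adj, n_samples, rng=None):
    """Sample edges proportionally to weight * effective resistance and
    reweight so the sparsifier is unbiased in expectation."""
    rng = rng or np.random.default_rng(0)
    adj = np.asarray(adj, float)
    res = effective_resistances(adj)
    edges = list(res)
    p = np.array([adj[e] * res[e] for e in edges])
    p /= p.sum()
    out = np.zeros_like(adj)
    for k in rng.choice(len(edges), size=n_samples, p=p):
        i, j = edges[k]
        out[i, j] += adj[i, j] / (n_samples * p[k])
        out[j, i] = out[i, j]
    return out
```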
In other cases, the correlation is weaker, and we discuss situations in which effective resistance is not a good proxy for epidemic importance.'\nauthor:\n- 'Alexander Mercier[$^{1,2}$]{}'\nbibliography:\n- 'CPNS\\_Paper.bib'\ntitle: 'Contagion-Preserving Network Sparsifiers: Exploring Epidemic Edge Importance Utilizing Effective Resistance'\n---\n\nIntroduction\n============\n\nMotivation\n----------\n\nNetworks" +"---\nabstract: 'Active fluids exhibit complex turbulent-like flows at low Reynolds number. Recent work predicted that 2d active nematic turbulence follows universal scaling laws. However, experimentally testing these predictions is conditioned by the coupling to the 3d environment. Here, we measure the spectrum of the kinetic energy, $E(q)$, in an active nematic film in contact with a passive oil layer. At small and intermediate scales, we find the scaling regimes $E(q)\\sim q^{-4}$ and $E(q)\\sim q^{-1}$, respectively, in agreement with the theoretical prediction for 2d active nematics. At large scales, however, we find a new scaling $E(q)\\sim q$, which emerges when the dissipation is dominated by the 3d oil layer. In addition, we derive an explicit expression for the spectrum that spans all length scales, thus explaining and connecting the different scaling regimes. This allows us to fit the data and extract the length scale that controls the crossover to the new large-scale regime, which we tune by varying the oil viscosity. Overall, our work experimentally demonstrates the emergence of universal scaling laws in active turbulence, and it establishes how the spectrum is affected by external dissipation.'\nauthor:\n- 'Berta Mart\u00ednez-Prat'\n- Ricard Alert\n- Fanlong Meng\n- 'Jordi Ign\u00e9s-Mullol'\n-" +"---\nabstract: '3D object detection is receiving increasing attention from both industry and academia thanks to its wide applications in various fields. In this paper, we propose Point-Voxel Region-based Convolution Neural Networks (PV-RCNNs) for 3D object detection on point clouds. First, we propose a novel 3D detector, PV-RCNN, which boosts the 3D detection performance by deeply integrating the feature learning of both point-based set abstraction and voxel-based sparse convolution through two novel steps, *i.e.*, the voxel-to-keypoint scene encoding and the keypoint-to-grid RoI feature abstraction. Second, we propose an advanced framework, PV-RCNN++, for more efficient and accurate 3D object detection. It consists of two major improvements: sectorized proposal-centric sampling for efficiently producing more representative keypoints, and VectorPool aggregation for better aggregating local point features with much less resource consumption. With these two strategies, our PV-RCNN++ is about $3\\times$ faster than PV-RCNN, while also achieving better performance. The experiments demonstrate that our proposed PV-RCNN++ framework achieves state-of-the-art 3D detection performance on the large-scale and highly-competitive Waymo Open Dataset with 10 FPS inference speed on the detection range of $150m \\times 150m$.'\nauthor:\n- Shaoshuai Shi\n- Li Jiang\n- Jiajun Deng\n- Zhe Wang\n- Chaoxu Guo\n- Jianping Shi\n- Xiaogang" +"---\nabstract: 'In this paper, we investigate a new variant of neural architecture search (NAS) paradigm \u2013 searching with random labels (RLNAS). The task sounds counter-intuitive for most existing NAS algorithms since random labels provide little information on the performance of each candidate architecture.
Instead, we propose a novel NAS framework based on ease-of-convergence hypothesis, which requires only random labels during searching. The algorithm involves two steps: first, we train a SuperNet using random labels; second, from the SuperNet we extract the sub-network whose weights change most significantly during the training. Extensive experiments are evaluated on multiple datasets (e.g. NAS-Bench-201 and ImageNet) and multiple search spaces (e.g. DARTS-like and MobileNet-like). Very surprisingly, RLNAS achieves comparable or even better results compared with state-of-the-art NAS methods such as PC-DARTS, Single Path One-Shot, even though the counterparts utilize full ground truth labels for searching. We hope our finding could inspire new understandings on the essential of NAS. Code is available at .'\nauthor:\n- |\n Xuanyang Zhang Pengfei Hou Xiangyu Zhang Jian Sun\\\n MEGVII Technology\\\n [{zhangxuanyang,houpengfei,zhangxiangyu,sunjian}@megvii.com]{}\nbibliography:\n- 'reference.bib'\ntitle: Neural Architecture Search with Random Labels\n---\n\nIntroduction\n============\n\nRecent years *Neural Architecture Search*\u00a0[@zoph2016neural; @baker2016designing; @zoph2018learning; @zhong2018practical; @zhong2018blockqnn; @liu2018progressive; @real2019regularized; @tan2019mnasnet; @chen2019detnas]" +"---\nabstract: 'Stochastic mechanics is regarded as a physical theory to explain quantum mechanics with classical terms such that some of the quantum mechanics paradoxes can be avoided. Here we propose a new variational principle to uncover more insights on stochastic mechanics. According to this principle, information measures, such as relative entropy and Fisher information, are imposed as constraints on top of the least action principle. This principle not only recovers Nelson\u2019s theory and consequently, the Schr\u00f6dinger equation, but also clears an unresolved issue in stochastic mechanics on why multiple Lagrangians can be used in the variational method and yield the same theory. The concept of forward and backward paths provides an intuitive physical picture for stochastic mechanics. Each path configuration is considered as a degree of freedom and has its own law of dynamics. Thus, the variation principle proposed here can be a new tool to derive more advanced stochastic theory by including additional degrees of freedom in the theory. The structure of Lagrangian developed here shows that some terms in the Lagrangian are originated from information constraints. This suggests a Lagrangian may need to include both physical and informational terms in order to have a complete description of" +"---\nabstract: 'We consider the minimal seesaw model, the Standard Model extended by two right-handed neutrinos, for explaining the neutrino masses and mixing angles measured in oscillation experiments. When one of right-handed neutrinos is lighter than the electroweak scale, it can give a sizable contribution to neutrinoless double beta ($0\\nu \\beta \\beta$) decay. We show that the detection of the $0 \\nu \\beta \\beta$ decay by future experiments gives a significant implication to the search for such light right-handed neutrino.'\nauthor:\n- Takehiko Asaka\n- Hiroyuki Ishida\n- Kazuki Tanaka\ntitle: ' Neutrinoless double beta decays tell nature of right-handed neutrinos '\n---\n\n[^1]\n\n[^2]\n\n[^3]\n\nThe Standard Model (SM) of the particle physics preserve two accidental global symmetries in the (classical) Lagrangian, namely the baryon and lepton number symmetries. 
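A minimal sketch of the second step described in the RLNAS abstract above, scoring a candidate operation by how much its weights moved during random-label SuperNet training; the angle-based score and all names are assumptions.

```python
import numpy as np

def weight_change_score(w_init, w_trained):
    """Score a candidate by the angle between its flattened initial and
    trained weight vectors (one common choice of weight-change metric;
    the paper's exact criterion may differ)."""
    a = np.concatenate([w.ravel() for w in w_init])
    b = np.concatenate([w.ravel() for w in w_trained])
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# The extracted sub-network keeps, on every edge of the search space, the
# candidate with the largest score, e.g.
# best_op = max(candidates, key=lambda op: weight_change_score(op.w0, op.w))
```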
It is well known that these global symmetries are non-perturbatively broken at the quantum level\u00a0[@tHooft:1976rip; @tHooft:1976snw], especially at high temperature of the universe\u00a0[@Dimopoulos:1978kv; @Manton:1983nd; @Klinkhamer:1984di; @Kuzmin:1985mm]. Even at the quantum level, however, a baryon minus lepton symmetry, often called $U(1)_{\\rm B \\mathchar`- L}$\u00a0[^4], has to be preserved in the SM.\n\nThe simplest way to break the $U(1)_{\\rm B \\mathchar`- L}$ symmetry without loss of the renormalizability is" +"---\nauthor:\n- |\n Yaqiong Wang\\\n Peking University Francesco Finazzi\\\n University of Bergamo Alessandro Fass\u00f2\\\n University of Bergamo\nbibliography:\n- 'JSS\\_2020\\_R2.bib'\ntitle: ' v2: A Software for Modelling Functional Spatio-Temporal Data'\n---\n\nIntroduction {#sec:intro}\n============\n\nWith the increase of multidimensional data availability and modern computing power, statistical models for spatial and spatio-temporal data are developing at a rapid pace. Hence, there is a need for stable and reliable, yet updated and efficient, software packages. In this section, we briefly discuss multidimensional data in climate and environmental studies as well as statistical software for space-time data.\n\nMultidimensional data {#sec:multidata}\n---------------------\n\nLarge multidimensional data sets often arise when climate and environmental phenomena are observed at the global scale over extended periods. In climate studies, relevant physical variables are observed on a three-dimensional (3D) spherical shell (the atmosphere) while time is the fourth dimension. For instance, measurements are obtained by radiosondes flying from ground level up to the stratosphere [@fasso2014statistical], by interferometric sensors aboard satellites [@finazzi2018statistical] or by laser-based methods, such as Light Detection and Ranging (LIDAR) [@negri2018modeling]. In this context, statistical modelling of multidimensional data requires describing and exploiting the spatio-temporal correlation of the underlying phenomenon or data-generating process. This is done" +"---\nabstract: 'In this article, we study approximation properties of the variation spaces corresponding to shallow neural networks with a variety of activation functions. We introduce two main tools for estimating the metric entropy, approximation rates, and $n$-widths of these spaces. First, we introduce the notion of a smoothly parameterized dictionary and give upper bounds on the non-linear approximation rates, metric entropy and $n$-widths of their absolute convex hull. The upper bounds depend upon the order of smoothness of the parameterization. This result is applied to dictionaries of ridge functions corresponding to shallow neural networks, and they improve upon existing results in many cases. Next, we provide a method for lower bounding the metric entropy and $n$-widths of variation spaces which contain certain classes of ridge functions. This result gives sharp lower bounds on the $L^2$-approximation rates, metric entropy, and $n$-widths for variation spaces corresponding to neural networks with a range of important activation functions, including ReLU$^k$ activation functions and sigmoidal activation functions with bounded variation.'\nauthor:\n- |\n Jonathan W. 
Siegel\\\n Department of Mathematics\\\n Pennsylvania State University\\\n University Park, PA 16802\\\n `jus1949@psu.edu`\\\n Jinchao Xu\\\n Department of Mathematics\\\n Pennsylvania State University\\\n University Park, PA 16802\\\n `jxx1@psu.edu`\\\nbibliography:\n- 'refs.bib'\ntitle:" +"---\nabstract: 'In recent years, supervised person re-identification (re-ID) models have received increasing attention. However, these models trained on the source domain always suffer a dramatic performance drop when tested on an unseen domain. Existing methods primarily use pseudo labels to alleviate this problem. One of the most successful approaches predicts neighbors of each unlabeled image and then uses them to train the model. Although the predicted neighbors are credible, they always miss some hard positive samples, which may hinder the model from discovering important discriminative information of the unlabeled domain. In this paper, to complement these low recall neighbor pseudo labels, we propose a joint learning framework to learn better feature embeddings via high precision neighbor pseudo labels and high recall group pseudo labels. The group pseudo labels are generated by transitively merging neighbors of different samples into a group to achieve higher recall. However, the merging operation may cause subgroups in the group due to imperfect neighbor predictions. To utilize these group pseudo labels properly, we propose using a similarity-aggregating loss to mitigate the influence of these subgroups by pulling the input sample towards the most similar embeddings. Extensive experiments on three large-scale datasets demonstrate that our" +"---\nauthor:\n- 'E. T. Mannila'\n- 'P. Samuelsson'\n- 'S. Simbierowicz'\n- 'J. T. Peltonen'\n- 'V. Vesterinen'\n- 'L. Gr[\u00f6]{}nberg'\n- 'J. Hassel'\n- 'V. F. Maisi'\n- 'J. P. Pekola'\ntitle: A superconductor free of quasiparticles for seconds\n---\n\n**Superconducting devices, based on the Cooper pairing of electrons, play an important role in existing and emergent technologies, ranging from radiation detectors [@day2003broadband; @echternach2018single] to quantum computers [@kjaergaard2020superconducting]. Their performance is limited by spurious quasiparticle excitations formed from broken Cooper pairs [@aumentado2004nonequilibrium; @shaw2008kinetics; @devisser2011number; @martinis2009energy; @catelani2011quasiparticle; @pop2014coherent; @wang2014measurement; @patel2017phononmediated; @gustavsson2016suppressing]. Efforts to achieve ultra-low quasiparticle densities have reached time-averaged numbers of excitations on the order of one in state-of-the-art devices [@ferguson2008quasiparticle; @higginbotham2015parity; @vool2014nonpoissonian; @gustavsson2016suppressing; @echternach2018single]. However, the dynamics of the quasiparticle population as well as the time scales for adding and removing individual excitations remain largely unexplored. Here, we experimentally demonstrate a superconductor completely free of quasiparticles for periods lasting up to seconds. We monitor the quasiparticle number on a mesoscopic superconductor in real time by measuring the charge tunneling to a normal metal contact. Quiet, excitation-free periods are interrupted by random-in-time Cooper pair breaking events, followed by a burst of charge tunneling within a millisecond. 
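The similarity-aggregating idea in the person re-ID abstract above, pulling a sample towards the most similar members of its group pseudo-label so that wrongly merged subgroups contribute little, might be sketched as below, assuming PyTorch; the softmax weighting is an illustrative choice, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def similarity_aggregating_loss(anchor, group_embeddings, temperature=0.1):
    """Pull `anchor` (shape (D,)) towards a similarity-weighted centroid of
    its pseudo-label group (shape (G, D)); the weighting emphasizes the
    most similar members (sketch only, names and form are assumptions)."""
    sims = F.cosine_similarity(anchor.unsqueeze(0), group_embeddings)   # (G,)
    weights = F.softmax(sims / temperature, dim=0)                      # near members dominate
    target = (weights.unsqueeze(1) * group_embeddings).sum(0)           # weighted centroid
    return 1.0 - F.cosine_similarity(anchor.unsqueeze(0), target.unsqueeze(0)).squeeze()
```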
Our results" +"---\nabstract: 'We present BoTNet, a conceptually simple yet powerful backbone architecture that incorporates self-attention for multiple computer vision tasks including image classification, object detection and instance segmentation. By just replacing the spatial convolutions with global self-attention in the final three bottleneck blocks of a ResNet and no other changes, our approach improves upon the baselines significantly on instance segmentation and object detection while also reducing the parameters, with minimal overhead in latency. Through the design of BoTNet, we also point out how ResNet bottleneck blocks with self-attention can be viewed as Transformer blocks. Without any bells and whistles, BoTNet achieves [**44.4**]{}% Mask AP and [**49.7**]{}% Box AP on the COCO Instance Segmentation benchmark using the Mask R-CNN framework; surpassing the previous best published single model and single scale results of ResNeSt\u00a0[@zhang2020resnest] evaluated on the COCO validation set. Finally, we present a simple adaptation of the BoTNet design for image classification, resulting in models that achieve a strong performance of [**84.7**]{}% top-1 accuracy on the ImageNet benchmark while being up to [**1.64x**]{} faster in \u201ccompute\u201d[^1] time than the popular EfficientNet models on TPU-v3 hardware. We hope our simple and effective approach will serve as a strong baseline for future" +"---\nabstract: 'Rotating machines like engines, pumps, or turbines are ubiquitous in modern day societies. Their mechanical parts such as electrical engines, rotors, or bearings are the major components and any failure in them may result in their total shutdown. Anomaly detection in such critical systems is very important to monitor the system\u2019s health. As the requirement to obtain a dataset from rotating machines where all possible faults are explicitly labeled is difficult to satisfy, we propose a method that focuses on the normal behavior of the machine instead. We propose an autoencoder model-based method for condition monitoring of rotating machines by using an anomaly detection approach. The method learns the characteristics of a rotating machine using the normal vibration signals to model the healthy state of the machine. A threshold-based approach is then applied to the reconstruction error of unseen data, thus enabling the detection of unseen anomalies. The proposed method can directly extract the salient features from raw vibration signals and eliminate the need for manually engineered features. We demonstrate the effectiveness of the proposed method by employing two rotating machine datasets and the quality of the automatically learned features is compared with a set of handcrafted features" +"---\nabstract: 'We consider Markov Decision Processes (MDPs) in which every stationary policy induces the same graph structure for the underlying Markov chain and further, the graph has the following property: if we replace each recurrent class by a node, then the resulting graph is acyclic. 
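The threshold-on-reconstruction-error scheme in the condition-monitoring abstract above reduces to a few lines; `model.predict` (returning the autoencoder's reconstruction of its input) and the quantile-based threshold are illustrative assumptions.

```python
import numpy as np

def detect_anomalies(model, signals, healthy_signals, quantile=0.99):
    """Flag samples whose autoencoder reconstruction error exceeds a
    threshold calibrated on normal (healthy) vibration data only."""
    def errors(x):
        return np.mean((x - model.predict(x)) ** 2, axis=1)   # per-sample MSE
    threshold = np.quantile(errors(healthy_signals), quantile)
    return errors(signals) > threshold                        # True -> anomalous
```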
For such MDPs, we prove the convergence of the stochastic dynamics associated with a version of optimistic policy iteration (OPI), suggested in [@tsitsiklis2002convergence], in which the values associated with all the nodes visited during each iteration of the OPI are updated.'\nauthor:\n- |\n Joseph Lubars\\\n University of Illinois at Urbana-Champaign\\\n `lubars2@illinois.edu`\\\n Anna Winnicki\\\n University of Illinois at Urbana-Champaign\\\n `annaw5@illinois.edu`\\\n Michael Livesay\\\n Sandia National Laboratories\\\n `mlivesa2@illinois.edu` R. Srikant\\\n University of Illinois at Urbana-Champaign\\\n `rsrikant@illinois.edu`\\\nbibliography:\n- 'refs.bib'\ntitle: Optimistic Policy Iteration for MDPs with Acyclic Transient State Structure\n---\n\nIntroduction\n============\n\nPolicy iteration is a key computational tool used in the study of Markov Decision Processes (MDPs) and Reinforcement Learning (RL) problems. In traditional policy iteration for MDPs, at each iteration, the value function associated with a policy is computed exactly and a new policy is chosen greedily with respect to this value function [@bertsekasvolI; @bersekasvolII; @bertsekastsitsiklis; @suttonbarto]. It can be shown that using" +"---\nabstract: 'The hierarchy of the coupling strengths in a physical system often engenders an effective model at low energies where the decoupled high-energy modes are integrated out. Here, using neutron scattering, we show that the spin excitations in the breathing pyrochlore lattice compound CuInCr$_4$S$_8$ are hierarchical and can be approximated by an effective model of correlated tetrahedra at low energies. At higher energies, intra-tetrahedron excitations together with strong magnon-phonon couplings are observed, which suggests the possible role of the lattice degree of freedom in stabilizing the spin tetrahedra. Our work illustrates the spin dynamics in CuInCr$_4$S$_8$ and demonstrates a general effective-cluster approach to understand the dynamics on the breathing-type lattices.'\nauthor:\n- Shang Gao\n- 'Andrew F. May'\n- 'Mao-Hua Du'\n- 'Joseph A. M. Paddison'\n- Hasitha Suriya Arachchige\n- Ganesh Pokharel\n- Clarina dela Cruz\n- Qiang Zhang\n- Georg Ehlers\n- 'David S. Parker'\n- 'David G. Mandrus'\n- 'Matthew B. Stone'\n- 'Andrew D. Christianson'\ntitle: Hierarchical excitations from correlated spin tetrahedra on the breathing pyrochlore lattice\n---\n\n[^1]\n\nFor the description of a physical system, selecting an appropriate energy scale is always the first step as it determines what (quasi-)particles and interactions might play" +"---\nabstract: 'This brief introduction to Model Predictive Control specifically addresses stochastic Model Predictive Control, where probabilistic constraints are considered. A simple linear system subject to uncertainty serves as an example. The Matlab code for this stochastic Model Predictive Control example is available online.'\nauthor:\n- |\n Tim Br\u00fcdigam\\\n Technical University of Munich, 80333 Munich, Germany\\\n tim.bruedigam@tum.de\nbibliography:\n- 'Dissertation\\_bib.bib'\ndate: '19.04.2021'\ntitle: '(Stochastic) Model Predictive Control - a Simulation Example'\n---\n\nIntroduction\n============\n\nIn the following, we provide details on an (S)MPC simulation example. The Matlab code for this simulation example is available at . The main purpose of this document is to introduce the idea of considering uncertainty in constraints within . 
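For the normally distributed case mentioned here, the chance-constraint reformulation that the document goes on to derive is standard and can be sketched as follows (scalar state, additive Gaussian disturbance; names are assumptions):

```python
from scipy.stats import norm

def tightened_bound(x_max, sigma, beta):
    """Reformulate the chance constraint  P(x <= x_max) >= beta  for
    x = x_nominal + w,  w ~ N(0, sigma^2),  as the deterministic
    constraint  x_nominal <= x_max - gamma  (standard Gaussian case)."""
    gamma = norm.ppf(beta) * sigma    # constraint tightening
    return x_max - gamma

# e.g. with x_max = 1.0, sigma = 0.1, beta = 0.9 the nominal prediction
# must satisfy x_nominal <= 1.0 - 1.2816 * 0.1, roughly 0.8718.
print(tightened_bound(1.0, 0.1, 0.9))
```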
This document is not necessarily a step-by-step manual, nor does it explain code in detail.\n\nIn Section\u00a0\\[sec:simu\\], we first introduce the deterministic system, constraints, and MPC optimal control problem. We then introduce uncertainty into the system dynamics and provide a brief overview of how to handle constraints subject to uncertainty, also called chance constraints.\n\nSection\u00a0\\[sec:SMPC\\] provides a more elaborate derivation of the chance constraint reformulation, both for normally distributed uncertainties and general probability distributions.\n\nSimulation Example {#sec:simu}\n==================\n\nWe consider the system example described" +"---\nabstract: 'In [@ketterer2014failure] Ketterer and Rajala showed an example of metric measure space, satisfying the measure contraction property ${\\mathsf{MCP}}(0,3)$, that has different topological dimensions at different regions of the space. In this article I propose a refinement of that example, which satisfies the ${\\mathsf{CD}}(0,\\infty)$ condition, proving the non-constancy of topological dimension for CD spaces. This example also shows that the weak curvature dimension bound, in the sense of Lott-Sturm-Villani, is not sufficient to deduce any reasonable non-branching condition. Moreover, it allows to answer to some open question proposed by Schultz in [@schultz2017existence], about strict curvature dimension bounds and their stability with respect to the measured Gromov Hausdorff convergence.'\nauthor:\n- Mattia Magnabosco\nbibliography:\n- 'example.bib'\ntitle: '**Example of an Highly Branching CD Space**'\n---\n\nIn their remarkable works Lott, Villani [@lottvillani] and Sturm [@sturm2006; @sturm2006ii] introduced a weak notion of curvature dimension bounds, which strongly relies on the theory of Optimal Transport. Inspired by some results that hold in the Riemannian case, they defined a consistent notion of curvature dimension bound for metric measure spaces, that is known as CD condition. The metric measure spaces satisfying the CD condition are called CD spaces and enjoy some remarkable analytic and" +"---\nabstract: |\n In this paper, we study a new micro-macro model for a reactive polymeric fluid, which is derived recently in \\[Y. Wang, T.-F. Zhang, and C. Liu, *J. Non-Newton. Fluid Mech.* 293 (2021), 104559, 13 pp\\], by using the energetic variational approach. The model couples the breaking/reforming reaction scheme of the microscopic polymers with other mechanical effects in usual viscoelastic complex fluids. We establish the global existence of classical solutions near the global equilibrium, in which the treatment on the chemo-mechanical coupling effect is the most crucial part. In particular, a weighted Poincar\u00e9 inequality with a mean value is employed to overcome the difficulty that arises from the non-conservative number density distribution of each species.\n\n *Keywords*: Global existence; Viscoelastic fluids; Energetic variational approach; A priori estimate; Weighted Poincar\u00e9 inequality. *2020 Mathematics Subject Classification*: 35A01, 35A15, 76A10, 76M30, 82D60\naddress:\n- 'Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616, USA'\n- 'Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616, USA'\n- 'School of Mathematics and Physics, China University of Geosciences, Wuhan, 430074, P. R. 
China '\nauthor:\n- Chun Liu\n- Yiwei Wang\n- 'Teng-Fei Zhang$^\\dag$'\nbibliography:\n- 'wlm-refs.bib'\ntitle: Global Existence of" +"---\nabstract: 'We estimate the black hole spin parameter in GRS 1915+105 using the continuum-fitting method with revised mass and inclination constraints based on the very long baseline interferometric parallax measurement of the distance to this source. We fit Rossi X-ray Timing Explorer observations selected to be accretion disk-dominated spectral states as described in McClintock et al. (2006) and Middleton et al. (2006), which previously gave discrepant spin estimates with this method. We find that, using the new system parameters, the spin in both datasets increased, providing a best-fit spin of $a_*=0.86$ for the Middleton et al. data and a poor fit for the McClintock et al. dataset, which becomes pegged at the BHSPEC model limit of $a_*=0.99$. We explore the impact of the uncertainties in the system parameters, showing that the best-fit spin ranges from $a_*= 0.4$ to 0.99 for the Middleton et al. dataset and allows reasonable fits to the McClintock et al. dataset with near maximal spin for system distances greater than $\\sim 10$ kpc. We discuss the uncertainties and implications of these estimates.'\nauthor:\n- 'Brianna S. Mills'\n- 'Shane W. Davis'\n- 'Matthew J. Middleton'\nbibliography:\n- 'bibliography.bib'\ntitle: 'The black hole spin in GRS" +"---\nabstract: 'The interplay of disorder and strong correlations in quantum many-body systems remains an open question. That is despite much progress made in recent years with ultracold atoms in optical lattices to better understand phenomena such as many-body localization or the effect of disorder on Mott metal-insulator transitions. Here, we utilize the numerical linked-cluster expansion technique, extended to treat disordered quantum lattice models, and study exact thermodynamic properties of the disordered Fermi-Hubbard model on the square and cubic geometries. We consider box distributions for the disorder in the onsite energy, the interaction strength, as well as the hopping amplitude and explore how energy, double occupancy, entropy, heat capacity and magnetic correlations of the system in the thermodynamic limit evolve as the strength of disorder changes. We compare our findings with those obtained from determinant quantum Monte Carlo simulations and discuss the relevance of our results to experiments with cold fermionic atoms in optical lattices.'\nauthor:\n- Jacob Park\n- Ehsan Khatami\ntitle: 'Thermodynamics of the disordered Hubbard model studied via numerical linked-cluster expansions'\n---\n\nIntroduction\n============\n\nThe interplay between electronic correlations and quenched (static) disorder is not well understood. From the experimental point of view, condensed matter experiments aiming" +"---\nabstract: 'Edge devices, such as cameras and mobile units, are increasingly capable of performing sophisticated computation in addition to their traditional roles in sensing and communicating signals. The focus of this paper is on collaborative object detection, where deep features computed on the edge device from input images are transmitted to the cloud for further processing. We consider the impact of packet loss on the transmitted features and examine several ways for recovering the missing data. 
In particular, through theory and experiments, we show that methods for image inpainting based on partial differential equations work well for the recovery of missing features in the latent space. The obtained results represent the new state of the art for missing data recovery in collaborative object detection.'\nauthor:\n- '\\'\nbibliography:\n- 'ref.bib'\ntitle: ' Latent-Space Inpainting for Packet Loss Concealment in Collaborative Object Detection [^1] '\n---\n\nCollaborative object detection, collaborative intelligence, latent space, missing data recovery, loss resilience\n\nIntroduction\n============\n\nIn video surveillance and monitoring systems, input video is usually sent to the cloud for temporary storage or further visual analysis. With the emergence of \u201csmart cameras,\u201d simpler forms of visual analysis can now be performed on-board, without the need" +"---\nabstract: 'Electrons confined in silicon quantum dots exhibit orbital, spin, and valley degrees of freedom. The valley degree of freedom originates from the bulk bandstructure of silicon, which has six degenerate electronic minima. The degeneracy can be lifted in silicon quantum wells due to strain and electronic confinement, but the \u201cvalley splitting\" of the two lowest lying valleys is known to be sensitive to atomic-scale disorder. Large valley splittings are desirable to have a well-defined spin qubit. In addition, an understanding of the inter-valley tunnel coupling that couples different valleys in adjacent quantum dots is extremely important, as the resulting gaps in the energy level diagram may affect the fidelity of charge and spin transfer protocols in silicon quantum dot arrays. Here we use microwave spectroscopy to probe spatial variations in the valley splitting, and the intra- and inter-valley tunnel couplings ($t_{ij}$ and $t''_{ij}$) that couple dots $i$ and $j$ in a triple quantum dot (TQD). We uncover large spatial variations in the ratio of inter-valley to intra-valley tunnel couplings $t_{12}''/t_{12}=0.90$ and $t_{23}''/t_{23}=0.56$. By tuning the interdot tunnel barrier we also show that $t''_{ij}$ scales linearly with $t_{ij}$, as expected from theory. The results indicate strong interactions between different" +"---\nabstract: 'Federated multi-armed bandits (FMAB) is a new bandit paradigm that parallels the federated learning (FL) framework in supervised learning. It is inspired by practical applications in cognitive radio and recommender systems, and enjoys features that are analogous to FL. This paper proposes a general framework of FMAB and then studies two specific federated bandit models. We first study the approximate model where the heterogeneous local models are random realizations of the global model from an unknown distribution. This model introduces a new uncertainty of *client sampling*, as the global model may not be reliably learned even if the finite local models are perfectly known. Furthermore, this uncertainty cannot be quantified *a priori* without knowledge of the suboptimality gap. We solve the approximate model by proposing Federated Double UCB (Fed2-UCB), which constructs a novel \u201cdouble UCB\u201d principle accounting for uncertainties from both arm and client sampling. We show that gradually admitting new clients is critical in achieving an $O(\\log(T))$ regret while explicitly considering the communication [cost]{}. The exact model, where the global bandit model is the exact average of heterogeneous local models, is then studied as a special case. 
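A minimal stand-in for the PDE-based feature inpainting discussed in the collaborative-detection abstract above: harmonic (Laplace) inpainting, where each missing value relaxes to the average of its four neighbours. The periodic boundary handling via `np.roll` and all names are simplifying assumptions.

```python
import numpy as np

def harmonic_inpaint(feat, missing, n_iter=500):
    """Fill missing entries of a 2-D feature channel by iterating the
    discrete Laplace equation on the lost region only."""
    f = np.array(feat, float)
    f[missing] = f[~missing].mean()          # neutral initialization
    for _ in range(n_iter):
        avg = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
        f[missing] = avg[missing]            # update only the lost region
    return f
```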
We show that, somewhat surprisingly, the order-optimal regret can be" +"---\nabstract: '**Building a Quantum Internet requires the development of new networking concepts at the intersection of frontier communication systems and long-distance quantum communication. Here, we present the implementation of a quantum-enabled internet prototype, where we have combined Software-Defined and Time-Sensitive Networking principles with Quantum Communication between quantum memories. Using a deployed quantum network connecting Stony Brook University and Brookhaven National Laboratory, we demonstrate a fundamental long-distance quantum network service, that of high-visibility Hong-Ou-Mandel Interference of telecom photons produced in two independent quantum memories separated by a distance of 158 km.**'\nauthor:\n- Dounan Du\n- 'Leonardo Castillo-Veneros'\n- Dillion Cottrill\n- 'Guo-Dong Cui'\n- Gabriel Bello\n- Mael Flament\n- Paul Stankus\n- Dimitrios Katramatos\n- 'Juli\u00e1n Mart\u00ednez-Rinc\u00f3n'\n- Eden Figueroa\nbibliography:\n- 'scibib.bib'\ntitle: 'A long-distance quantum-capable internet testbed'\n---\n\nIntroduction\n============\n\nQuantum technologies have great potential to enhance information processing, secure communication, and fundamental scientific research\u00a0[@Acin2018]. The functional modularity and scalability of Quantum Networks (QNs) make them ideal foundations\u00a0[@Simon2017] to achieve quantum advantage in large distributed quantum processing systems\u00a0[@OhioWorskhop]. Along these lines, realizations, such as Memory-Assisted Measurement Device Independent Quantum Key Distribution (MA-MDI-QKD)\u00a0[@Lo_2012], and entanglement distribution using Quantum Repeaters (QRs)\u00a0[@Lloyd2001], will be" +"---\nabstract: 'Genetic programming is the practice of evolving formulas using crossover and mutation of genes representing functional operations. Motivated by genetic evolution we develop and solve two combinatorial games, and we demonstrate some advantages and pitfalls of using genetic programming to investigate Grundy values. We conclude by investigating a combinatorial game whose ruleset and starting positions are inspired by genetic structures.'\ntitle: An investigation into the application of genetic programming to combinatorial game theory\n---\n\n[Melissa A. Huggan]{}^1^, [Craig Tennenhouse]{}^2^\n\n^1^Ryerson University, Toronto, ON, Canada, [melissa.huggan@ryerson.ca](mailto:Melissa.Huggan@ryerson.ca)\n\n^2^University of New England, Biddeford, ME 04005, USA, \n\n[Keywords]{}: Combinatorial Game Theory, Genetic Algorithms, Genetic Programming\n\nIntroduction {#sec:intro}\n============\n\nThe fundamental unit of biological evolution is a gene, which represents a small piece of information, and the genome is a collection of genes that encodes an organism\u2019s complete genetic information. Within the context of biological evolution, the genes of the most fit organisms survive and are passed onto the next generation, with their chromosomes modifying over time to better fit their environment through competition. This modification occurs through the processes of mutation and crossover, wherein individual genes are altered and pairs of chromosomes trade information, respectively, as organisms pass down their genetic" +"---\nabstract: 'In the two-phase scenario of galaxy formation, a galaxy\u2019s stellar mass growth is first dominated by in-situ star formation, and subsequently by accretion. 
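For reference, the Grundy values investigated in the genetic-programming abstract above are computed with the classical mex recursion; the sketch below (with a Nim-style example) is standard combinatorial game theory, not the paper's code.

```python
def mex(values):
    """Minimum excludant: the smallest non-negative integer not in `values`."""
    s = set(values)
    m = 0
    while m in s:
        m += 1
    return m

def grundy(position, moves, cache=None):
    """Grundy value of an impartial-game position: the mex of the Grundy
    values of its options. `moves(position)` returns reachable positions."""
    cache = {} if cache is None else cache
    if position not in cache:
        cache[position] = mex(grundy(p, moves, cache) for p in moves(position))
    return cache[position]

# Example: subtraction game on a single heap, removing 1, 2 or 3 tokens.
# grundy(n, lambda n: [n - k for k in (1, 2, 3) if k <= n]) equals n % 4.
```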
We analyse the radial distribution of the accreted stellar mass in $\\sim$500 galaxies from the hydrodynamical cosmological simulation Magneticum. Generally, we find good agreement with other simulations in that higher mass galaxies have larger accreted fractions, but we predict higher accretion fractions for low-mass galaxies. Based on the radial distribution of the accreted and in-situ components, we define 6 galaxy classes, from completely accretion dominated to completely in-situ dominated, and measure the transition radii between in-situ and accretion-dominated regions for galaxies that have such a transition. About 70% of our galaxies have one transition radius. However, we also find about 10% of the galaxies to be accretion dominated everywhere, and about 13% to have two transition radii, with the centre and the outskirts both being accretion dominated. We show that these classes are strongly correlated with the galaxy merger histories, especially with the mergers\u2019 cold gas fractions. We find high total in-situ (low accretion) fractions to be associated with smaller, lower mass galaxies, lower central dark matter fractions, and larger transition radii. Finally, we show" +"---\nabstract: 'One of the main limitations in the field of audio signal processing is the lack of large public datasets with audio representations and high-quality annotations due to restrictions of copyrighted commercial music. We present [[Melon Playlist Dataset]{}]{}, a public dataset of mel-spectrograms for 649,091 tracks and 148,826 associated playlists annotated by 30,652 different tags. All the data is gathered from Melon, a popular Korean streaming service. The dataset is suitable for music information retrieval tasks, in particular, auto-tagging and automatic playlist continuation. Even though the latter can be addressed by collaborative filtering approaches, audio provides opportunities for research on track suggestions and building systems resistant to the cold-start problem, for which we provide a baseline. Moreover, the playlists and the annotations included in the [[Melon Playlist Dataset]{}]{} make it suitable for metric learning and representation learning.'\naddress: |\n $^{\\star}$ Music Technology Group - Universitat Pompeu Fabra, Spain\\\n $^{\\dagger}$ Kakao Corp, Korea\nbibliography:\n- 'strings.bib'\n- 'refs.bib'\ntitle: 'Melon Playlist Dataset: a public dataset for audio-based playlist generation and music tagging'\n---\n\nDatasets, music information retrieval, music playlists, auto-tagging, audio signal processing\n\nIntroduction {#sec:intro}\n============\n\nOpen access to adequately large datasets is one of the main challenges in the" +"---\nauthor:\n- Carsten Feldkamp\nbibliography:\n- 'Literatur.bib'\ntitle: Freiheitssatz for amalgamated products of free groups over maximal cyclic subgroups\n---\n\nIntroduction\n============\n\nFor a group $G$ and an element $r \\in G$ we denote the normal closure of $r$ in $G$ by $\\langle \\! \\langle r \\rangle \\! \\rangle_{G}$. We mostly write $G / \\langle \\! \\langle r \\rangle \\! \\rangle$ instead of $G / \\langle \\! \\langle r \\rangle \\! \\rangle_{G}$ if it is clear from the context, that the normal closure is taken over $G$. Further, we write $[a,b]=a^{-1}b^{-1}ab$ for the commutator $[a,b]$ of two elements $a,b$.\n\nIn 1930, W. 
Magnus proved the classical *Freiheitssatz*: If $F$ is a free group with basis $\\mathcal{X}$ and $r$ a cyclically reduced element containing a basis element $x \\in \\mathcal{X}$, then the subgroup freely generated by $\\mathcal{X} \\backslash \\{x\\}$ embeds canonically into the quotient group $F / \\langle \\! \\langle r \\rangle \\! \\rangle$. This result became a cornerstone of one-relator group theory and led to different kinds of natural generalizations.\n\nOne way to generalize the Freiheitssatz of W. Magnus is to study so-called *one-relator products*. A *one-relator product* of groups $A_{j}$ ($j\\in \\mathcal{J}$) for some index set $\\mathcal{J}$ is" +"---\nauthor:\n- 'Matthew J. Dolan,'\n- 'Frederick J. Hiskens,[!!]{}'\n- 'and Raymond R. Volkas'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Constraining axion-like particles using the white dwarf initial-final mass relation'\n---\n\nIntroduction\n============\n\n![Constraints on ALP mass $m_a$ and coupling strength to photons $g_{a\\gamma\\gamma}$ in the keV-MeV mass range. Individual bounds are referenced in the text. These are shown at 95% confidence level. The constraint derived in this work is labelled \u2019WD-IFMR\u2019.[]{data-label=\"fig: ALP_param_space\"}](Introduction/Figures/ALP_Param.pdf)\n\nAxion-like particles (ALPs) are light, weakly interacting pseudoscalars which feature in many extensions of the Standard Model (SM) of particle physics. They arise as pseudo-Nambu Goldstone bosons (pNGBs) of spontaneously broken symmetries in, for example, the Peccei-Quinn solution of the strong CP problem [@PQ1; @Peccei:1977ur; @Weinberg-40.223; @Wilczek:1977pj], compactification scenarios in string theory [@Svrcek:2006yi; @Arvanitaki:2009fg; @Cicoli:2012sz] and in models of electroweak relaxation [@Graham:2015cka].\n\nThe properties of specific ALPs, such as their masses and coupling strengths to SM particles, are model-dependent, which has sparked investigations of their influence across a wide phenomenological range. Light ALPs with masses below the MeV scale impact astrophysical and cosmological phenomena [@Cadamuro:2011fd], such as Big Bang Nucleosynthesis (BBN) [@Updated_BBN], the Cosmic Microwave Background (CMB) and stellar evolution [@Raffelt-Bounds-on-light; @RAFFELT1982323; @Raffelt:1996wa; @Ayala:2014pea; @Aoyama:2015asa; @Carenza:2020zil; @Friedland:2012hj; @Dominguez;" +"---\nabstract: 'Powder-based additive manufacturing techniques provide tools to construct intricate structures that are difficult to manufacture using conventional methods. In Laser Powder Bed Fusion, components are built by selectively melting specific areas of the powder bed, to form the two-dimensional cross section of the specific part. However, the high occurrence of defects impacts the adoption of this method for precision applications. Therefore, a control policy for dynamically altering process parameters to avoid phenomena that lead to defect occurrences is necessary. A Deep Reinforcement Learning (DRL) framework that derives a versatile control strategy for minimizing the likelihood of these defects is presented. The generated control policy alters the velocity of the laser during the melting process to ensure the consistency of the melt pool and reduce overheating in the generated product.
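To make the control setup above concrete, here is a hypothetical gym-style sketch; the environment dynamics, names, and reward are illustrative stand-ins, not the authors' simulator. The action rescales the laser velocity and the reward penalizes deviation from a target melt-pool state.

```python
# Hypothetical sketch (toy dynamics, not the paper's thermal simulator):
# the policy's action multiplicatively adjusts the laser scan velocity.
import numpy as np

class MeltPoolEnv:
    """Toy surrogate: reward penalizes deviation from a target melt-pool depth."""
    def __init__(self, target_depth=1.0):
        self.target, self.depth, self.v = target_depth, 0.0, 1.0

    def step(self, action):
        self.v = np.clip(self.v * (1.0 + action), 0.2, 5.0)  # velocity update
        # crude surrogate: slower scanning deposits more energy -> deeper pool
        self.depth = 0.9 * self.depth + 0.1 * (2.0 / self.v)
        reward = -abs(self.depth - self.target)              # consistency objective
        return np.array([self.depth, self.v]), reward

env, ret = MeltPoolEnv(), 0.0
for _ in range(100):
    action = np.random.uniform(-0.1, 0.1)   # stand-in for the learned DRL policy
    state, r = env.step(action)
    ret += r
print(f"return of random policy: {ret:.2f}")
```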
The control policy is trained and validated on efficient simulations of the continuum temperature distribution of the powder bed layer under various laser trajectories.'\nauthor:\n- |\n    Francis Ogoke\\\n    Department of Mechanical Engineering\\\n    Carnegie Mellon University\\\n    Pittsburgh, PA 15213\\\n    Amir Barati Farimani\\\n    Department of Mechanical Engineering\\\n    Carnegie Mellon University\\\n    Pittsburgh, PA 15213\\\nbibliography:\n- 'BibFilesRLAM.bib'\ntitle: Thermal Control of Laser Powder Bed Fusion using Deep Reinforcement Learning\n---\n\nIntroduction" +"---\nabstract: 'Cross-document co-reference resolution (CDCR) is the task of identifying and linking mentions to entities and concepts across many text documents. Current state-of-the-art models for this task assume that all documents are of the same type (e.g. news articles) or fall under the same theme. However, it is also desirable to perform CDCR across different domains (type or theme). A particular use case we focus on in this paper is the resolution of entities mentioned across scientific work and newspaper articles that discuss them. Identifying the same entities and corresponding concepts in both scientific articles and news can help scientists understand how their work is represented in mainstream media. We propose a new task and English language dataset for cross-document cross-domain co-reference resolution (CD$^2$CR). The task aims to identify links between entities across heterogeneous document types. We show that in this cross-domain, cross-document setting, existing CDCR models do not perform well and we provide a baseline model that outperforms current state-of-the-art CDCR models on CD$^2$CR. Our data set, annotation tool and guidelines as well as our model for cross-document cross-domain co-reference are all supplied as open-access, open-source resources.'\nauthor:\n- '**James Ravenscroft**'\n- '**Arie Cattan**'\n- '**Amanda" +"---\nabstract: 'Hawkes processes have been shown to be efficient in modeling bursty sequences in a variety of applications, such as finance and social network activity analysis. Traditionally, these models parameterize each process independently and assume that the history of each point process can be fully observed. Such models could however be inefficient or even inapplicable in certain real-world applications, such as in the field of education, where such assumptions are violated. Motivated by the problem of detecting and predicting student procrastination in Massive Open Online Courses (MOOCs) with missing and partially observed data, in this work, we propose a novel personalized Hawkes process model (*RCHawkes-Gamma*) that discovers meaningful student behavior clusters by jointly learning all partially observed processes simultaneously, without relying on auxiliary features. Our experiments on both synthetic and real-world education datasets show that RCHawkes-Gamma can effectively recover student clusters and their temporal procrastination dynamics, resulting in better predictive performance of future student activities.
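For readers unfamiliar with Hawkes processes, a minimal sketch of the textbook exponential-kernel conditional intensity follows; RCHawkes-Gamma itself adds clusters, gamma kernels, and partial observability, none of which is reproduced here.

```python
# Textbook baseline only (not the RCHawkes-Gamma model): conditional intensity
# of a univariate Hawkes process with an exponential excitation kernel.
import numpy as np

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """lambda(t) = mu + alpha * sum_{t_i < t} beta * exp(-beta * (t - t_i))."""
    past = np.asarray([ti for ti in events if ti < t])
    return mu + alpha * np.sum(beta * np.exp(-beta * (t - past)))

events = [1.0, 1.3, 1.4, 4.0]            # bursty activity timestamps
for t in (1.5, 2.5, 5.0):
    print(f"lambda({t}) = {hawkes_intensity(t, events):.3f}")
```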
Our further analyses of the learned parameters and their association with student delays show that the discovered student clusters unveil meaningful representations of various procrastination behaviors in students.'\nauthor:\n- 'Mengfan Yao,^1^ Siqian Zhao, ^1^ Shaghayegh Sahebi, ^1^ Reza Feyzi Behnagh ^2^\\'\nbibliography:" +"---\nabstract: 'The core of security proofs of quantum key distribution (QKD) is the estimation of a parameter that determines the amount of privacy amplification that the users need to apply in order to distill a secret key. To estimate this parameter using the observed data, one needs to apply concentration inequalities, such as random sampling theory or Azuma\u2019s inequality. The latter can be straightforwardly employed in a wider class of QKD protocols, including those that do not rely on mutually unbiased encoding bases, such as the loss-tolerant (LT) protocol. However, when applied to real-life finite-length QKD experiments, Azuma\u2019s inequality typically results in substantially lower secret-key rates. Here, we propose an alternative security analysis of the LT protocol against general attacks, for both its prepare-and-measure and measure-device-independent versions, that is based on random sampling theory. Consequently, our security proof provides considerably higher secret-key rates than the previous finite-key analysis based on Azuma\u2019s inequality. This work opens up the possibility of using random sampling theory to provide alternative security proofs for other QKD protocols.'\nauthor:\n- 'Guillermo Curr\u00e1s-Lorenzo'\n- \u00c1lvaro Navarrete\n- Margarida Pereira\n- Kiyoshi Tamaki\nbibliography:\n- 'refs.bib'\ntitle: 'Finite-key analysis of loss-tolerant quantum key distribution based on random" +"---\nabstract: 'Carbon nanotubes tend to collapse when their diameters exceed a certain threshold, or when a sufficiently large external pressure is applied on their walls. The radial stability of tubes has been studied in each of these cases; however, a general theory able to predict collapse is still lacking. Here, we propose a simple model predicting stability limits as a function of the tube diameter, the number of walls and the pressure. The model is supported by atomistic simulations, experiments, and is used to plot collapse phase diagrams. We have identified the most stable carbon nanotube, which can support a maximum pressure of $\\sim$18 GPa before collapsing. The latter was identified as a multiwall tube with an internal tube diameter of $\\sim$12 nm and $\\sim$30 walls. This maximum pressure is lowered depending on the internal tube diameter and the number of walls. We then identify a tube diameter domain in which the radial mechanical stability can be treated as equivalent to macroscopic tubes, known to be described by the canonical L\u00e9vy-Carrier law. This multiscale behavior is shown to be in good agreement with experiments based on O-ring gaskets collapse, proposed as a simple macroscopic parallel to nanotubes in this domain.'" +"---\nabstract: |\n    Let $L_{a,b}$ be a line in the Euclidean plane with slope $a$ and intercept $b$. The dimension spectrum $\\operatorname{sp}(L_{a,b})$ is the set of all effective dimensions of individual points on $L_{a,b}$. Jack Lutz, in the early 2000s, posed the *dimension spectrum conjecture*. This conjecture states that, for every line $L_{a,b}$, the spectrum of $L_{a,b}$ contains a unit interval.\n\n    In this paper we prove that the dimension spectrum conjecture is true.
Specifically, let $(a,b)$ be a slope-intercept pair, and let $d = \\min\\{\\dim(a,b), 1\\}$. For every $s \\in [0, 1]$, we construct a point $x$ such that $\\dim(x, ax + b) = d + s$. Thus, we show that $\\operatorname{sp}(L_{a,b})$ contains the interval $[d, 1+ d]$.\nauthor:\n- |\n    D. M. Stull\\\n    Department of Computer Science\\\n    Northwestern University\\\n    `donald.stull@northwestern.edu`\nbibliography:\n- 'DSCPL.bib'\ntitle: The Dimension Spectrum Conjecture for Planar Lines\n---\n\nIntroduction\n============\n\nThe effective dimension, $\\dim(x)$, of a point $x\\in {\\mathbb{R}}^n$ gives a fine-grained measure of the algorithmic randomness of $x$. Effective dimension was first defined by J. Lutz [@Lutz03a], and was originally used to quantify the sizes of complexity classes. Unsurprisingly, because of its strong connection to (classical) Hausdorff dimension, effective dimension has proven to be" +"---\nabstract: 'We make an updated review and a systematic and comprehensive analysis of the decays of Higgs bosons in the Standard Model (SM) and its three well-defined prototype extensions such as the complex singlet extension of the SM (cxSM), the four types of two Higgs-doublet models (2HDMs) without tree-level Higgs-mediated flavor-changing neutral current (FCNC) and the minimal supersymmetric extension of the SM (MSSM). We summarize the theoretical predictions for the decay widths of the SM Higgs boson and those of Higgs bosons appearing in its extensions taking account of all possible decay modes. We incorporate them to study and analyze decay patterns of CP-even, CP-odd, and CP-mixed neutral Higgs bosons and charged ones. We put special focus on the properties of a neutral Higgs boson with mass about 125 GeV discovered at the LHC and present constraints obtained from precision analysis of it. This review is intended to be self-contained and consolidated by coherently integrating relevant physics information for studying decays of Higgs bosons in the SM and beyond.'\nauthor:\n- |\n    Seong Youl Choi,$^{1}$[^1]\u00a0 Jae Sik Lee,$^{2,3,4}$[^2]\u00a0 Jubin Park$^{3,4}$[^3]\\\n    \\\n    $^1$ Department of Physics and RIPC, Jeonbuk National University, Jeonju 54896, Korea\\\n    $^2$ Department of Physics, Chonnam National" +"---\nabstract: 'In recent years, the Deep Learning Alternating Minimization (DLAM), which is actually the alternating minimization applied to the penalty form of deep neural network training, has been developed as an alternative algorithm to overcome several drawbacks of Stochastic Gradient Descent (SGD) algorithms. This work develops an improved DLAM using the well-known inertial technique, namely iPDLAM, which predicts a point by linearization of the current and last iterates. To obtain further training speed, we apply a warm-up technique to the penalty parameter, that is, starting with a small initial one and increasing it in the iterations. Numerical results on real-world datasets are reported to demonstrate the efficiency of our proposed algorithm.'\naddress: |\n    College of Computer, National University of Defense Technology\\\n    *qiao.linbo@nudt.edu.cn*, *nudtsuntao@163.com*, *{hengyuepan, dsli}@nudt.edu.cn*\\\nbibliography:\n- 'inerbib.bib'\ntitle: Inertial Proximal Deep Learning Alternating Minimization for Efficient Neural Network Training\n---\n\nNonconvex alternating minimization, Penalty, Inertial method, Network training.\n\nIntroduction\n============\n\nThe deep neural network has achieved great success in computer vision and machine learning.
Mathematically, training an $L$-layer neural network can be formulated as: $$\\small\n\\min_{{\\bf W}_1,{\\bf W}_2,\\ldots,{\\bf W}_L} \\{{ \\mathcal{L}}({\\bf y},\\sigma_{L}({\\bf W}_L ...\\sigma_1({\\bf W}_1{\\bf a}_0)))+\\sum_{l=1}^L R_l({\\bf W}_l) \\},$$ where ${\\bf a}_0$ denotes the training sample and ${\\bf" +"---\nabstract: 'We construct geometric compactifications of the moduli space $F_{2d}$ of polarized K3 surfaces, in any degree $2d$. Our construction is via KSBA theory, by considering canonical choices of divisor $R\\in |nL|$ on each polarized K3 surface $(X,L)\\in F_{2d}$. The main new notion is that of a [*recognizable divisor*]{} $R$, a choice which can be consistently extended to all central fibers of Kulikov models. We prove that any choice of recognizable divisor leads to a semitoroidal compactification of the period space, at least up to normalization. Finally, we prove that the rational curve divisor is recognizable for all degrees.'\naddress:\n- 'Department of Mathematics, University of Georgia, Athens GA 30602, USA'\n- 'Department of Mathematics, University of Georgia, Athens GA 30602, USA'\nauthor:\n- Valery Alexeev\n- Philip Engel\ndate: 'March 31, 2023'\ntitle: Compact moduli of K3 surfaces\n---\n\nIntroduction {#sec:introduction}\n============\n\nLet $F_{2d}$ be the coarse moduli space of complex K3 surfaces $X$ having ADE singularities with an ample line bundle $L$ of degree $L^2=2d$. A well-known corollary of the Torelli theorem [@piateski-shapiro1971torelli] is that $F_{2d}={{\\mathbb D}}/\\Gamma$ is the quotient of a $19$-dimensional symmetric type IV domain ${{\\mathbb D}}$ by an arithmetic group $\\Gamma\\subset O(2,19)$. In" +"---\nauthor:\n- |\n    W. Narloch[^1], G. Pietrzy\u0144ski, W. Gieren, A.\u00a0E. Piatti, M. G\u00f3rski, P. Karczmarek, D. Graczyk,\\\n    K. Suchomska, B. Zgirski, P. Wielg\u00f3rski, B. Pilecki, M. Taormina, M. Ka\u0142uszy\u0144ski, W. Pych, G. Hajdu\\\n- 'G. Rojas Garc\u00eda'\ndate: 'Received ; Accepted 14 January 2021'\ntitle: Metallicities and ages for 35 star clusters and their surrounding fields in the Small Magellanic Cloud\n---\n\n[In this work we study 35 stellar clusters in the Small Magellanic Cloud (SMC) in order to provide their mean metallicities and ages. We also provide mean metallicities of the fields surrounding the clusters.]{} [We use Str\u00f6mgren photometry obtained with the 4.1 m SOAR telescope and take advantage of $(b-y)$ and $m1$ colors for which there is a\u00a0metallicity calibration presented in the literature.]{} [The spatial metallicity and age distributions of clusters across the SMC are investigated using the results obtained by Str\u00f6mgren photometry. We confirm earlier observations that younger, more metal-rich star clusters are concentrated in the central regions of the galaxy, while older, more metal-poor clusters are located farther from the SMC center. We construct the age\u2013metallicity relation for the studied clusters and find good agreement with theoretical models of chemical enrichment, and with" +"---\nabstract: |\n    The goal of this paper is to simulate the voters\u2019 behaviour given a voting method. Our approach uses a multi-agent simulation in order to model a voting process through many iterations, so that the voters can vote by taking into account the results of polls.
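A minimal sketch of such an iterative poll-driven loop follows, under assumed compromise rules (plurality voting, with voters switching to their favourite of the two poll leaders); the paper's agents may follow different rules.

```python
# Illustrative sketch only: iterative plurality voting with poll feedback.
# The compromise rule below is an assumption, not the paper's exact behaviour.
from collections import Counter
import random

def iterate_votes(prefs, rounds=10):
    """prefs: list of preference orders (best first). After a sincere first
    round, voters back their favourite among the current top-2 poll leaders."""
    votes = [p[0] for p in prefs]                 # round 0: sincere voting
    for _ in range(rounds):
        poll = Counter(votes)
        top2 = [c for c, _ in poll.most_common(2)]
        votes = [next(c for c in p if c in top2) for p in prefs]
    return Counter(votes)

random.seed(0)
candidates = ["A", "B", "C"]
prefs = [random.sample(candidates, 3) for _ in range(101)]
print(iterate_votes(prefs).most_common())
```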
Here we only tried basic rules and a single voting method, but further attempts could explore new features.\n\n **Keywords:** Computational social choice, Iterative voting, multi-agent simulation\nauthor:\n- |\n Albin Soutif Carole Adam Sylvain Bouveret\\\n Univ. Grenoble-Alpes, Grenoble Informatics Laboratory\ndate: |\n **\\\n ** \ntitle: 'Multi-agent simulation of voter\u2019s behaviour'\n---\n\nINTRODUCTION\n============\n\nA voting process involves the participation of many people that interact together in order to reach a common decision. In this paper, we focus on voting processes in which a single person is elected. A voting method is defined as the set of rules that determine the winner of the election, given an input from each voter, for example their preferred candidate or an order relation between all candidates. Social Choice Theory is the field that studies the aggregation of individual preferences towards a collective choice, like for example electing a candidate or choosing a movie. Computational social choice" +"---\nabstract: |\n In this work we analyse the notion of measurement non-contextuality (MNC) and identify contextual scenarios which involve sequential measurements of only a single measurement device. We show that any non-contextual ontological model fails to explain the statistics of outcomes of a single carefully constructed positive operator valued measure (POVM) executed sequentially on a quantum system. The context of measurement arises from the different configurations in which the device can be used. We develop an inequality from the non-contextual (NC) ontic model, and construct a quantum situation involving measurements from the KCBS inequality. We show that the resultant statistics arising from this device violate our NC inequality. This device can be generalised by incorporating measurements from arbitrary $n$-cycle contextuality inequalities of which $n =\n 5$ corresponds to the KCBS inequality. We show that the NC and quantum bounds for various scenarios can be derived more easily using only the functional relationships between the outcomes for larger values $n$. This makes it one of the simpler contextual inequalities to analyse.\nauthor:\n- Jaskaran Singh\n- Rajendra Singh Bhati\n- Arvind\ntitle: Revealing quantum contextuality using a single measurement device \n---\n\nIntroduction\n============\n\nQuantum theory is contextual since the outcomes" +"---\nauthor:\n- Stefano Bianchini\n- Moritz M\u00fcller\n- Pierre Pelletier\n- Kevin Wirtz\ntitle: 'Global health science leverages established collaboration network to fight COVID-19'\n---\n\nAbstract {#abstract .unnumbered}\n========\n\n[**How has the science system reacted to the early stages of the COVID-19 pandemic? Here we compare the (growing) international network for coronavirus research with the broader international health science network. Our findings show that, before the outbreak, coronavirus research realized a relatively small and rather peculiar niche within the global health sciences. As a response to the pandemic, the international network for coronavirus research expanded rapidly along the hierarchical structure laid out by the global health science network. Thus, in face of the crisis, the global health science system proved to be structurally stable yet versatile in research. The observed versatility supports optimistic views on the role of science in meeting future challenges. 
However, the stability of the global core-periphery structure may be worrying, because it reduces learning opportunities and social capital of scientifically peripheral countries \u2014 not only during this pandemic but also in its \u201cnormal\u201d mode of operation.**]{}\\\n[ ***Keywords*** COVID-19 $|$ Scientific Networks $|$ International Collaboration $|$ Health Sciences ]{}\n\nIntroduction {#introduction .unnumbered}\n============\n\nInternational scientific" +"---\nabstract: 'We select 37 of the most common and realistic dense matter equations of state to integrate the general relativistic stellar structure equations for static spherically symmetric matter configurations. For all these models, we check the compliance of the acceptability conditions that every stellar model should satisfy. It was found that some of the non-relativistic equations of state violate the causality and/or the dominant energy condition and that adiabatic instabilities appear in the inner crust for all equations of state considered.'\naddress:\n- '$^1$ Escuela de F\u00edsica, Universidad Industrial de Santander, Bucaramanga, Colombia'\n- '$^2$ Departamento de F\u00edsica, Universidad de los Andes, M\u00e9rida, Venezuela'\n- '$^3$ Departamento de Matem\u00e1tica Aplicada, Universidad de Salamanca, Salamanca, Espa\u00f1a'\nauthor:\n- 'D L Ramos-Salamanca$^{1}$, L A N\u00fa\u00f1ez$^{1,2}$ and J Ospino$^{3}$'\ntitle: Physical acceptability conditions for realistic neutron star equations of state\n---\n\nIntroduction\n============\n\nNeutron stars are among the densest astronomical objects in the universe. These stars are formed from the gravitational collapse of massive stars $M > 8 M_{\\odot}$ (supernova event) and leave a compact remnant whose mass and radius usually lie between $1 - 2 \\,M_{\\odot}$ and $10-14\\,\\rm{km}$, respectively [@HaenselPotekhinYakovlev2007].\n\nThe inner structure of a compact object is heavily influenced by the equation" +"---\nabstract: 'We introduce a protocol to transfer excitations between two noninteracting qubits via purely dissipative processes (i.e., in the Lindblad master equation there is no coherent interaction between the qubits). The fundamental ingredients are the presence of collective (i.e. nonlocal) dissipation and unbalanced local dissipation rates (the qubits dissipate at different rates). The resulting quantum trajectories show that the measurement backaction changes the system wave function and induces a passage of the excitation from one qubit to the other. While similar phenomena have been witnessed for a non-Markovian environment, here the dissipative quantum state transfer is induced by an update of the observer's knowledge of the wave function in the presence of a Markovian (memoryless) environment\u2014this is a single quantum trajectory effect. Beyond single quantum trajectories and postselection, such an effect can be observed by histogramming the quantum jumps along several realizations at different times. By investigating the effect of the temperature in the presence of unbalanced local dissipation, we demonstrate that, if appropriately switched on and off, the collective dissipator can act as a Maxwell\u2019s demon. These effects are a generalized measure equivalent to the standard projective measure description of quantum teleportation and Maxwell\u2019s demon.
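A small numerical sketch of the mechanism just described, with illustrative rates (not the paper's parameters): the postselected no-jump evolution generated by the collective and unbalanced local dissipators transfers the excitation between the qubits even though there is no coherent coupling.

```python
# Illustrative sketch: postselected no-jump trajectory for two qubits with
# collective dissipation plus unbalanced local decay; rates are placeholders.
import numpy as np
from scipy.linalg import expm

sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # qubit lowering operator
I2 = np.eye(2)
sm1, sm2 = np.kron(sm, I2), np.kron(I2, sm)

g1, g2, gc = 1.0, 0.1, 0.5                 # unbalanced local rates, collective rate
Lc = sm1 + sm2                             # collective (nonlocal) jump operator
# Effective non-Hermitian generator conditioning on "no jump observed":
Heff = -0.5j * (g1 * sm1.T @ sm1 + g2 * sm2.T @ sm2 + gc * Lc.T @ Lc)

psi = np.kron([0.0, 1.0], [1.0, 0.0])      # excitation starts in qubit 1
for t in np.linspace(0.0, 5.0, 6):
    phi = expm(-1j * Heff * t) @ psi
    phi = phi / np.linalg.norm(phi)        # renormalize the conditional state
    p2 = np.abs(phi[1]) ** 2               # population of |01> (qubit 2 excited)
    print(f"t={t:.1f}  P(qubit 2 excited | no jump) = {p2:.3f}")
```

Note that the cross terms of $L_c^\dagger L_c$ couple the two single-excitation states inside the non-Hermitian generator, which is why the transfer appears only at the conditional (single-trajectory) level and not in the unconditional master-equation populations.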
They can be" +"---\nabstract: 'Effectiveness of teaching digital signal processing can be enhanced by reducing lecture time devoted to theory, and increasing emphasis on applications, programming aspects, visualization and intuitive understanding. An integrated approach to teaching requires instructors to simultaneously teach theory and its applications in storage and processing of audio, speech and biomedical signals. Student engagement can be enhanced by having students work in groups during class, where they can solve short problems and short programming assignments or take quizzes. These approaches will increase student interest in learning the subject as well as student engagement.'\nauthor:\n- 'Keshab K. Parhi,\u00a0 [^1]'\nbibliography:\n- 'sample.bib'\ntitle: 'Teaching Digital Signal Processing by Partial Flipping, Active Learning and Visualization'\n---\n\n[Parhi : Title of the Paper]{}\n\nEducation, Digital Signal Processing, Active Learning, Flipping, Blended Learning, Visualization, Programming based Problem Solving\n\nIntroduction\n============\n\nDigital signal processing (DSP) is used in numerous applications such as communications, biomedical signal analysis, healthcare, network theory, finance, surveillance, robotics, and feature extraction for data analysis. Learning DSP is more important than ever before because it provides the foundation for machine learning and artificial intelligence.\n\nThe DSP community has benefited tremendously from Oppenheim\u2019s views of education\u00a0[@oppenheim1992personal; @oppenheim2006one] and from his many" +"---\nabstract: 'Night vision imaging is a technology that converts objects invisible to human eyes into visible images at night and in other low-light environments. However, conventional night vision imaging can only directly produce grayscale images. Here, we propose a novel night vision imaging method based on intensity correlation of light. The object\u2019s information detected by infrared non-visible light is expressed by visible light via the spatial intensity correlation of light. With simple data processing, a color night vision image can be directly produced by this approach without any pseudo-color image processing. Theoretical and experimental results show that a color night vision image with quality comparable to classical visible light imaging can be obtained by this method. Surprisingly, the color colorfulness index of the reconstructed night vision image is significantly better than that of the conventional visible light image and pseudo-color night vision image. Although the reconstructed image cannot completely restore the natural color of the object, the color image obtained by this method looks more natural than that obtained by other pseudo-color image processing methods.'\nauthor:\n- 'Deyang Duan, Yunjie Xia'\ntitle: True color night vision correlated imaging based on intensity correlation of light\n---\n\nIntroduction\n============" +"---\nabstract: 'In this paper, we study a parallel-in-time (PinT) algorithm for the all-at-once system arising from a non-local evolutionary equation with weakly singular kernel, where the temporal term involves a non-local convolution with a weakly singular kernel and the spatial term is the usual Laplacian operator with variable coefficients. Such a problem has been intensively studied in recent years thanks to its real-world applications. However, due to the non-local property of the time evolution, solving the equation in a PinT manner is difficult.
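A toy sketch of the all-at-once construction mentioned above (the discretization and convolution weights below are illustrative placeholders, not the paper's scheme): all time steps are stacked into one linear system whose temporal part is block lower-triangular Toeplitz.

```python
# Toy illustration of an "all-at-once" system: a time-stepping recurrence with
# convolution weights w[k] is assembled into one block lower-triangular
# Toeplitz system and solved for every time step simultaneously.
import numpy as np

n_t, n_x = 8, 16
h = 1.0 / (n_x + 1)
A = (np.diag(2 * np.ones(n_x)) - np.diag(np.ones(n_x - 1), 1)
     - np.diag(np.ones(n_x - 1), -1)) / h**2            # 1-D Laplacian stencil
# decaying weights mimicking a weakly singular kernel (illustrative only)
w = np.array([(k + 1) ** 0.5 - k ** 0.5 for k in range(n_t)])

I = np.eye(n_x)
M = np.zeros((n_t * n_x, n_t * n_x))
for i in range(n_t):
    M[i*n_x:(i+1)*n_x, i*n_x:(i+1)*n_x] += A            # spatial part per step
    for j in range(i + 1):                              # Toeplitz blocks w[i-j]*I
        M[i*n_x:(i+1)*n_x, j*n_x:(j+1)*n_x] += w[i - j] * I

b = np.ones(n_t * n_x)
u = np.linalg.solve(M, b)                               # all time steps at once
print(u.reshape(n_t, n_x)[-1, :3])
```

In practice one never forms or factors this dense matrix; the point of the two-sided preconditioning described next is precisely to exploit the Toeplitz and Laplacian structure instead.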
We propose to use a two-sided preconditioning technique for the all-at-once discretization of the equation. Our preconditioner is constructed by replacing the variable diffusion coefficients with a constant coefficient to obtain a constant-coefficient all-at-once matrix. We split a square root of the constant Laplacian operator out of the constant-coefficient all-at-once matrix as a right preconditioner and take the remaining part as a left preconditioner, which constitutes our two-sided preconditioning. Exploiting the diagonalizability of the constant-Laplacian matrix and the triangular Toeplitz structure of the temporal discretization matrix, we obtain efficient representations of the inverses of the right and left preconditioners, because of which the iterative solution can be rapidly updated in a PinT manner. Theoretically, the condition number of the two-sided preconditioned matrix is proven" +"---\nabstract: |\n    The segmentation of emails into functional zones (also dubbed **email zoning**) is a relevant preprocessing step for most NLP tasks that deal with emails. However, despite the multilingual character of emails and their applications, previous literature regarding email zoning corpora and systems was developed essentially for English.\n\n    In this paper, we analyse the existing email zoning corpora and propose a new multilingual benchmark composed of 625 emails in Portuguese, Spanish and French. Moreover, we introduce , the first multilingual email segmentation model based on a language-agnostic sentence encoder. Besides generalizing well to unseen languages, our model is competitive with current English benchmarks, and reaches new state-of-the-art performance for domain adaptation tasks in English.\nauthor:\n- |\n    Bruno Jardim\\\n    Cleverly, Lisbon, Portugal\\\n    NOVA-IMS, Lisbon, Portugal\\\n    `bjardim@novaims.unl.pt`\\\n    Ricardo Rei\\\n    NOVA-IMS, Lisbon, Portugal\\\n    Unbabel, Lisbon, Portugal\\\n    `rrei@novaims.unl.pt`\\\n    Mariana S. C. Almeida\\\n    Cleverly, Lisbon, Portugal\\\n    `mariana.almeida@cleverly.ai`\\\nbibliography:\n- 'anthology.bib'\n- 'referencias.bib'\ntitle: Multilingual Email Zoning\n---\n\nIntroduction\n============\n\nWorldwide, email is a predominant means of social and business communication. Its importance has attracted studies in areas of Machine Learning (ML) and Natural Language Processing (NLP), impacting a wide range of applications, from spam filtering [@QaroushKW12] to network analysis [@Christidis2019].\n\n![" +"---\nabstract: 'With the goal of designing novel inhibitors for SARS-CoV-1 and SARS-CoV-2, we propose the general molecule optimization framework, **Mo**lecular **N**eural **A**ssay **S**earch ([MONAS]{}), consisting of three components: a property predictor which identifies molecules with specific desirable properties, an energy model which approximates the statistical similarity of a given molecule to known training molecules, and a molecule search method. In this work, these components are instantiated with graph neural networks (GNNs), Deep Energy Estimator Networks (DEEN) and Monte Carlo tree search (MCTS), respectively.
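A generic skeleton of the search component only: the `expand` and `score` callbacks below are hypothetical stand-ins for the trained GNN property predictor and DEEN energy model, and the value bookkeeping is deliberately simplified.

```python
# Generic MCTS skeleton (simplified; not the MONAS implementation).
# expand(state) must return candidate successor states, e.g. one-edit
# modifications of a SMILES string; score(state) is the learned objective.
import math

def ucb(parent, child, c=1.4):
    return child["value"] / child["visits"] + c * math.sqrt(
        math.log(parent["visits"]) / child["visits"])

def mcts(root_state, expand, score, iters=200):
    root = {"state": root_state, "children": [], "visits": 1, "value": 0.0}
    for _ in range(iters):
        node, path = root, [root]
        while node["children"]:                        # selection
            parent = node
            node = max(node["children"], key=lambda ch: ucb(parent, ch))
            path.append(node)
        for s in expand(node["state"]):                # expansion
            node["children"].append({"state": s, "children": [],
                                     "visits": 1, "value": score(s)})
        best = max((ch["value"] for ch in node["children"]),
                   default=score(node["state"]))       # rollout stand-in
        for n in path:                                 # backpropagation
            n["visits"] += 1
            n["value"] += best
    return max(root["children"], key=lambda ch: ch["visits"])["state"]
```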
This implementation is used to identify 120K molecules (out of 40-million explored) which the GNN determined to be likely SARS-CoV-1 inhibitors, and, at the same time, are statistically close to the dataset used to train the GNN.'\nauthor:\n- Timothy Atkinson\n- Saeed Saremi\n- Faustino Gomez\n- Jonathan Masci\nbibliography:\n- 'bibliography.bib'\ndate: |\n NNAISENSE S.A.\\\n June 2020\\\ntitle: Automatic design of novel potential 3CL^pro^ and PL^pro^ inhibitors\n---\n\nIntroduction\n============\n\nOver the past year, the search for molecules which may inhibit key receptor sites of Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) has emerged as a central research objective within the scientific community\u00a0[@JEDI]. The already widespread use of Deep Learning (DL) techniques as predictors" +"---\nabstract: |\n The simplicial rook graph ${\\rm \\mathcal{SR}}(m,n)$ is the graph whose vertices are vectors in $ \\mathbb{N}^m$ such that for each vector the summation of its coordinates is $n$ and two vertices are adjacent if their corresponding vectors differ in exactly two coordinates. Martin and Wagner (Graphs Combin. (2015) 31:1589\u20131611) asked about the independence number of ${\\rm \\mathcal{SR}}(m,n)$ that is the maximum number of non attacking rooks which can be placed on a $(m-1)$-dimensional simplicial chessboard of side length $n+1$. In this work, we solve this problem and show that $\\alpha({\\rm \\mathcal{SR}}(m,n))=\\big(1-o(1)\\big)\\frac{\\binom{n+m-1}{n}}{m}$. We also prove that for the domination number of rook graphs we have $\\gamma({\\rm \\mathcal{SR}}(m, n))= \\Theta (n^{m-2})$. Moreover we show that these graphs are Hamiltonian.\n\n The cyclic simplicial rook graph ${\\rm \\mathcal{CSR}}(m,n)$ is the graph whose vertices are vectors in $\\mathbb{Z}^{m}_{n}$ such that for each vector the summation of its coordinates modulo $n$ is $0$ and two vertices are adjacent if their corresponding vectors differ in exactly two coordinates. In this work we determine several properties of these graphs such as independence number, chromatic number and automorphism group. Among other results, we also prove that computing the distance between two vertices of a given ${\\rm" +"---\nabstract: 'The thermoelectric behaviour of quark-gluon plasma has been studied within the framework of an effective kinetic theory by adopting a quasiparticle model to incorporate the thermal medium effects. The thermoelectric response of the medium has been quantified in terms of the Seebeck coefficient. The dependence of the collisional aspects of the QCD medium on the Seebeck coefficient has been estimated by utilizing relaxation time approximation and Bhatnagar-Gross-Krook collision kernels in the effective Boltzmann equation. The thermoelectric coefficient is seen to depend on the quark chemical potential and collision aspects of the medium. Besides, the thermoelectric effect has been explored in a magnetized medium and the respective transport coefficients, such as magnetic field-dependent Seebeck coefficient and Nernst coefficient, have been estimated. 
The impacts of hot QCD medium interactions incorporated through the effective model and the magnetic field on the thermoelectric responses of the medium have been observed to be more prominent in the temperature regimes not very far from the transition temperature.'\nauthor:\n- Manu Kurian\ntitle: Thermoelectric behaviour of hot collisional and magnetized QCD medium from an effective kinetic theory\n---\n\nIntroduction\n============\n\nExperimental programs at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) have confirmed" +"---\nabstract: 'The EXperiment for Cryogenic Large-Aperture Intensity Mapping (EXCLAIM) is a balloon-borne far-infrared telescope that will survey star formation history over cosmological time scales to improve our understanding of why the star formation rate declined at redshift $z < 2$, despite continued clustering of dark matter. Specifically, EXCLAIM will map the emission of redshifted carbon monoxide and singly-ionized carbon lines in windows over a redshift range $0 < z < 3.5$, following an innovative approach known as intensity mapping. Intensity mapping measures the statistics of brightness fluctuations of cumulative line emissions instead of detecting individual galaxies, thus enabling a blind, complete census of the emitting gas. To detect this emission unambiguously, EXCLAIM will cross-correlate with a spectroscopic galaxy catalog. The EXCLAIM mission uses a cryogenic design to cool the telescope optics to approximately $1.7$\u00a0K. The telescope features a $90$-cm primary mirror to probe spatial scales on the sky from the linear regime up to shot noise-dominated scales. The telescope optical elements couple to six \u03bc-Spec spectrometer modules, operating over a $420$\u2013$540$ GHz frequency band with a spectral resolution of $512$ and featuring microwave kinetic inductance detectors. A Radio Frequency System-on-Chip (RFSoC) reads out the detectors in the baseline design." +"---\nabstract: 'Both observations and recent numerical simulations of the circumgalactic medium (CGM) support the hypothesis that a self-regulating feedback loop suspends the gas density of the ambient CGM close to the galaxy in a state with a ratio of cooling time to freefall time $\\gtrsim 10$. This limiting ratio is thought to arise because circumgalactic gas becomes increasingly susceptible to multiphase condensation as the ratio declines. If the timescale ratio gets too small, then cold clouds precipitate out of the CGM, rain into the galaxy, and fuel energetic feedback that raises the ambient cooling time. The astrophysical origin of this so-called precipitation limit is not simple but is critical to understanding the CGM and its role in galaxy evolution. This paper therefore attempts to interpret its origin as simply as possible, relying mainly on conceptual reasoning and schematic diagrams. It illustrates how the precipitation limit can depend on both the global configuration of a galactic atmosphere and the degree to which dynamical disturbances drive CGM perturbations. It also frames some tests of the precipitation hypothesis that can be applied to both CGM observations and numerical simulations of galaxy evolution.'\nauthor:\n- 'G. Mark Voit'\ntitle: A Graphical Interpretation of" +"---\nabstract: 'We introduce a generative smoothness regularization on manifolds (SToRM) model for the recovery of dynamic image data from highly undersampled measurements.
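A rough PyTorch-style sketch of the penalties the SToRM abstract goes on to describe (data consistency, a crude Jacobian-norm proxy for manifold smoothness, and temporal smoothness of the latents); the operator signature `A`, the weights, and the gradient proxy are assumptions, not the authors' implementation.

```python
# Hedged sketch of a SToRM-like objective (simplified, illustrative only).
import torch

def storm_like_loss(gen, z, meas, A, lam_jac=1e-3, lam_t=1e-2):
    """gen: CNN generator; z: (T, d) latent time series; meas: undersampled
    measurements per frame; A(img, t): hypothetical forward operator."""
    z = z.requires_grad_(True)
    imgs = gen(z)                                    # (T, ...) image series
    data = sum(((A(imgs[t], t) - meas[t]) ** 2).sum() for t in range(len(z)))
    # manifold smoothness: penalize gradients of the mapping w.r.t. latents
    # (sum-of-outputs gradient is a cheap proxy for the Jacobian norm)
    grad = torch.autograd.grad(imgs.sum(), z, create_graph=True)[0]
    jac = (grad ** 2).sum()
    # smoothly varying time series: penalize temporal latent differences
    temporal = ((z[1:] - z[:-1]) ** 2).sum()
    return data + lam_jac * jac + lam_t * temporal
```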
The model assumes that the images in the dataset are non-linear mappings of low-dimensional latent vectors. We use a deep convolutional neural network (CNN) to represent the non-linear transformation. The parameters of the generator as well as the low-dimensional latent vectors are jointly estimated only from the undersampled measurements. This approach is different from traditional CNN approaches that require extensive fully sampled training data. We penalize the norm of the gradients of the non-linear mapping to constrain the manifold to be smooth, while temporal gradients of the latent vectors are penalized to obtain a smoothly varying time-series. The proposed scheme brings in the spatial regularization provided by the convolutional network. The main benefit of the proposed scheme is the improvement in image quality and the orders-of-magnitude reduction in memory demand compared to traditional manifold models. To minimize the computational complexity of the algorithm, we introduce an efficient progressive training-in-time approach and an approximate cost function. These approaches speed up the image reconstructions and offer better reconstruction performance.'\nauthor:\n- 'Qing Zou, Abdul Haseeb Ahmed, Prashant Nagpal, Stanley" +"---\nabstract: 'This report is an account of freely representable groups, which are finite groups admitting linear representations whose only fixed point for a nonidentity element is the zero vector. The standard reference for such groups is Wolf\u00a0(1967) where such groups are used to classify spaces of constant positive curvature. Such groups also arise in the theory of norm relations in algebraic number theory, as demonstrated recently by Biasse, Fieker, Hofmann, and Page (2020). This report aims to synthesize the information and results from these and other sources to give a continuous, self-contained development of the subject. I introduce new points of view, terminology, results, and proofs in an effort to give a coherent, detailed, self-contained, and accessible narrative.'\nauthor:\n- 'Wayne Aitken[^1]'\nbibliography:\n- 'FreelyRepresentable.bib'\ntitle: Report on Freely Representable Groups\n---\n\nThis is an account of freely representable groups, which are finite groups admitting linear representations whose only fixed point for a nonidentity element is the zero vector. Such groups attracted my attention as being exactly the groups that do not have a general type of norm relation (as recently shown in\u00a0[@biasse2020norm]). Interestingly, such groups arose earlier as the key to the classification of Riemannian manifolds" +"---\nabstract: 'A noteworthy aspect in blood flow modeling is the definition of the mechanical interaction between the fluid flow and the biological structure that contains it, namely the vessel wall. Particularly, it has been demonstrated that the addition of a viscous contribution to the mechanical characterization of vessels brings positive results when compared to *in-vivo* measurements. In this context, the implementation of boundary conditions able to keep memory of the viscoelastic contribution of vessel walls assumes an important role, especially when dealing with large circulatory systems. In this work, viscoelasticity is taken into account in entire networks via the Standard Linear Solid Model. The implementation of the viscoelastic contribution at boundaries (inlet, outlet and junctions) is carried out considering the hyperbolic nature of the mathematical model.
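For reference, one common (Zener/Maxwell-representation) form of the Standard Linear Solid law mentioned above is sketched below; the paper's exact parametrization for vessel walls may differ.

```latex
% One common form of the Standard Linear Solid constitutive relation
% (a spring E_1 in parallel with a Maxwell branch E_2--eta); the vessel-wall
% parametrization used in the paper may differ from this textbook form.
\begin{equation}
  \sigma + \frac{\eta}{E_2}\,\dot{\sigma}
  = E_1\,\varepsilon + \frac{\eta\,(E_1 + E_2)}{E_2}\,\dot{\varepsilon},
\end{equation}
% with E_1 the long-term elastic modulus, E_2 the Maxwell-branch modulus,
% and eta the dashpot viscosity.
```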
Specifically, a non-linear system is established based on the definition of the Riemann Problem at junctions, characterized by rarefaction waves separated by contact discontinuities, among which the mass and the total energy are conserved. Basic junction tests are analyzed, such as a trivial 2-vessel junction, for both a generic artery and a generic vein, and a simple 3-vessel junction, considering an aortic bifurcation scenario. The chosen IMEX Runge-Kutta Finite Volume scheme is demonstrated" +"---\nabstract: 'Starting from a recently proposed comprehensive theory for the high-Tc superconductivity in cuprates, we derive a general analytic expression for the planar resistivity, in the presence of an applied external magnetic field $\\textbf{H}$ and explore its consequences in the different phases of these materials. As an initial probe of our result, we show it compares very well with experimental data for the resistivity of LSCO at different values of the applied field. We also apply our result to Bi2201 and show that the magnetoresistivity in the strange metal phase of this material exhibits the $H^2$ to $H$ crossover, as we move from the weak to the strong field regime. Yet, despite that, the magnetoresistivity does not present a quadrature scaling. Remarkably, the resistivity H-field derivative does scale as a function of $\\frac{H}{T}$, in complete agreement with recent magneto-transport measurements made in the strange metal phase of cuprates [@Hussey2020]. We, finally, address the issue of the $T$-power-law dependence of the resistivity of overdoped cuprates and compare our results with experimental data for Tl2201. We show that this provides a simple method to determine whether the quantum critical point associated with the pseudogap temperature $T^*(x)$ belongs to the SC" +"---\nauthor:\n- 'Carter BLUM$^{1,2}$[^1]'\n- Hao LIU$^1$\n- |\n    Hui XIONG$^{1,3}$\\\n    $^1$ Business Intelligence Lab, Baidu Inc., Beijing, CN\\\n    $^2$University of Minnesota, Minneapolis, MN, US\\\n    $^3$Rutgers University, Newark, NJ, US\\\n    blumx116@umn.edu, {liuhao30, xionghui01}@baidu.com\n- |\n    Carter W. Blum$^{1,2}$[^2] Liu Hao$^1$ Hui Xiong$^{1,3}$\\\n    \\\n    $^1$ Business Intelligence Lab, Baidu Inc., Beijing, CN\\\n    $^2$ University of Minnesota, Minneapolis, MN, US\\\n    $^3$ Rutgers University, Newark, NJ, US\nbibliography:\n- 'bibliography.bib'\ndate: December 2019\ntitle: 'CoordiQ : Coordinated Q-learning for Electric Vehicle Charging Recommendation'\n---\n\nAbstract\n========\n\nElectric vehicles have been rapidly increasing in usage, but stations to charge them have not always kept up with demand, so efficient routing of vehicles to stations is critical to operating at maximum efficiency. Deciding which stations to recommend drivers to is a complex problem with a multitude of possible recommendations, volatile usage patterns and temporally extended consequences of recommendations. Reinforcement learning offers a powerful paradigm for solving sequential decision-making problems, but traditional methods may struggle with sample efficiency due to the high number of possible actions. By developing a model that allows complex representations of actions, we improve outcomes for users of our system by over 30% when compared to existing baselines in a simulation." +"---\nabstract: |\n    **Motivation**: Peptides have attracted attention in this century due to their remarkable therapeutic properties.
Computational tools are being developed to take advantage of existing information, encapsulating knowledge and making it available in a simple way for general public use. However, these are property-specific redundant data systems, and usually do not display the data in a clear way. In some cases, information download is not even possible. This data needs to be available in a simple form for drug design and other biotechnological applications.\\\n    **Results**: We developed Peptipedia, a user-friendly database and web application to search, characterise and analyse peptide sequences. Our tool integrates the information from thirty previously reported databases, making it the largest repository of peptides with recorded activities so far. Besides, we implemented a variety of services to increase our tool\u2019s usability. The significant differences between our tool and other existing alternatives make it a substantial contribution to the development of biotechnological and bioengineering applications for peptides.\\\n    **Availability**: Peptipedia is available for non-commercial use as open-access software, licensed under the GNU General Public License, version GPL 3.0. The web platform is publicly available at [pesb2.cl/peptipedia](pesb2.cl/peptipedia). Both the source code and sample datasets are available in the" +"---\nabstract: 'Let $w = [[x^k, y^l], [x^m, y^n]]$ be a non-trivial double commutator word. We show that $w$ is surjective on $\\operatorname{PSL}_2(K)$, where $K$ is an algebraically closed field of characteristic $0$.'\naddress:\n- 'Urban Jezernik, Alfr\u00e9d R\u00e9nyi Institute of Mathematics, Hungarian Academy of Sciences, Re\u00e1ltanoda utca 13-15, H-1053, Budapest, Hungary'\n- 'Jonatan S\u00e1nchez, Department of Applied Mathematics (DMATIC), ETSI Ingenieros Inform\u00e1ticos, Universidad Polit\u00e9cnica de Madrid, Campus de Montegancedo, Avenida de Montepr\u00edncipe, 28660, Boadilla del Monte, Spain '\nauthor:\n- Urban Jezernik\n- Jonatan S\u00e1nchez\nbibliography:\n- 'refs.bib'\ntitle: 'On surjectivity of word maps on $\\operatorname{PSL}_2$'\n---\n\nIntroduction\n============\n\nWords, word maps and their surjectivity\n---------------------------------------\n\nA *word in two variables* $w$ is an element of the free group $\\mathbb{F}_2 = \\langle x, y \\rangle$. Given a group $G$, the word $w$ induces a *word map* $\\tilde w$ on $G$ by evaluation, $$\\tilde w \\colon G \\times G \\to G, \\quad (g,h) \\mapsto w(g,h).$$\n\nWhen the underlying group $G$ is a connected semisimple algebraic group, say $\\operatorname{SL}_n(K)$ for an algebraically closed field $K$, every non-trivial word map is dominant by a theorem of Borel [@borel1983free].\n\nFor certain words, one can even prove surjectivity and possibly further properties of the" +"---\nabstract: 'Peters\u2019 formula is an analytical estimate of the time-scale of gravitational wave (GW)-induced coalescence of binary systems. It is used in countless applications, where the convenience of a simple formula outweighs the need for precision. However, many promising sources of the Laser Interferometer Space Antenna (LISA), such as supermassive black hole binaries and extreme mass-ratio inspirals (EMRIs), are expected to enter the LISA band with highly eccentric ($e \\gtrsim 0.9$) and highly relativistic orbits. These are exactly the two limits in which Peters\u2019 estimate performs the worst.
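For reference, the leading-order expressions usually meant by "Peters' formula" (Peters 1964) are quoted below: the coalescence time-scale of a circular binary and the standard approximate eccentricity correction factor.

```latex
% Peters' (1964) coalescence time-scale for a circular binary of semi-major
% axis a_0 and component masses m_1, m_2, with the commonly used approximate
% eccentricity enhancement factor; quoted for reference only.
\begin{equation}
  t_{\rm c}(a_0) = \frac{5}{256}\,\frac{c^5\, a_0^4}{G^3\, m_1 m_2 (m_1 + m_2)},
  \qquad
  t_{\rm gw}(a_0, e_0) \simeq t_{\rm c}(a_0)\,\bigl(1 - e_0^2\bigr)^{7/2}.
\end{equation}
```

It is precisely the breakdown of these leading-order expressions at high eccentricity and in the relativistic regime that motivates the corrections discussed next.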
In this work, we expand upon previous results and give simple analytical fits to quantify how the inspiral time-scale is affected by the relative 1.5 post-Newtonian (PN) hereditary fluxes and spin-orbit couplings. We discuss several cases that demand a more accurate GW time-scale. We show how this can have a major influence on quantities that are relevant for LISA event-rate estimates, such as the EMRI critical semi-major axis. We further discuss two types of environmental perturbations that can play a role in the inspiral phase: the gravitational interaction with a third massive body and the energy loss due to dynamical friction and torques from a surrounding gas medium ubiquitous in galactic" +"---\nabstract: |\n We develop new strategies to build numerical relativity surrogate models for eccentric binary black hole systems, which are expected to play an increasingly important role in current and future gravitational-wave detectors. We introduce a new surrogate waveform model, `NRSur2dq1Ecc`, using 47 nonspinning, equal-mass waveforms with eccentricities up to $0.2$ when measured at a reference time of $5500M$ before merger. This is the first waveform model that is directly trained on eccentric numerical relativity simulations and does not require that the binary circularizes before merger. The model includes the $(2,2)$, $(3,2)$, and $(4,4)$ spin-weighted spherical harmonic modes. We also build a final black hole model, `NRSur2dq1EccRemnant`, which models the mass, and spin of the remnant black hole. We show that our waveform model can accurately predict numerical relativity waveforms with mismatches $\\approx\n 10^{-3}$, while the remnant model can recover the final mass and dimensionless spin with absolute errors smaller than $\\approx 5 \\times 10^{-4}M$ and $\\approx 2 \\times10^{-3}$ respectively. We demonstrate that the waveform model can also recover subtle effects like mode-mixing in the ringdown signal without any special ad-hoc modeling steps. Finally, we show that despite being trained only on equal-mass binaries, `NRSur2dq1Ecc` can be reasonably extended" +"---\nabstract: 'We quantify the impact of unpolarized lepton-proton and lepton-nucleus inclusive deep-inelastic scattering (DIS) cross section measurements from the future Electron-Ion Collider (EIC) on the proton and nuclear parton distribution functions (PDFs). To this purpose we include neutral- and charged-current DIS pseudodata in a self-consistent set of proton and nuclear global PDF determinations based on the NNPDF methodology. We demonstrate that the EIC measurements will reduce the uncertainty of the light quark PDFs of the proton at large values of the momentum fraction $x$, and, more significantly, of the quark and gluon PDFs of heavy nuclei, especially at small and large $x$. We illustrate the implications of the improved precision of nuclear PDFs for the interaction of ultra-high energy cosmic neutrinos with matter.'\n---\n\nNikhef-2020-041\n\n$\\qquad$\n\n[**Self-consistent determination of proton and nuclear PDFs\\\nat the Electron Ion Collider**]{}\n\nRabah Abdul Khalek$^{1,2}$, Jacob J. Ethier$^{1,2}$, Emanuele R. 
Nocera$^{2,3}$, and Juan Rojo$^{1,2}$\n\n[ *\u00a0$^1$ Department of Physics and Astronomy, VU Amsterdam, 1081 HV Amsterdam,\\\n\u00a0$^2$ Nikhef Theory Group, Science Park 105, 1098 XG Amsterdam, The Netherlands\\\n\u00a0$^3$ The Higgs Centre for Theoretical Physics,\\\nUniversity of Edinburgh, JCMB, KB, Mayfield Rd, Edinburgh EH9 3FD, United Kingdom* ]{}\n\n[ **Introduction** \u2013]{} The" +"---\nabstract: 'This paper describes our method for tuning a transformer-based pretrained model to the Reliable Intelligence Identification on Vietnamese SNSs problem. We also propose a model that combines bert-base pretrained models with some metadata features, such as the number of comments, number of likes, images of SNS documents,... to improve results for the VLSP shared task: Reliable Intelligence Identification on Vietnamese SNSs. With appropriate training techniques, our model is able to achieve $0.9392$ ROC-AUC on the public test set and the final version settles at top 2 ROC-AUC ($0.9513$) on the private test set.'\nauthor:\n- |\n    Thanh Chinh Nguyen\\\n    Brains Technology, Inc.\\\n    [chinh.nguyen@brains-tech.co.jp]{}\\\n    Van Nha Nguyen\\\n    Websosanh AI\\\n    [nhanv@websosanh.org]{}\\\nbibliography:\n- 'reference.bib'\ntitle: 'NLPBK at VLSP-2020 shared task: Compose transformer pretrained models for Reliable Intelligence Identification on Social network'\n---\n\nIntroduction\n============\n\nIn recent years, the use of SNSs has become a necessary daily activity. As a result, SNSs have become the leading tool for spreading news. In SNSs, news can spread exponentially, but a number of users also tend to spread unreliable information for their personal purposes, affecting the online society. In fact, SNSs have proved to be a powerful source for fake news dissemination ([@10.1145/3132847.3132877], [@10.1145/3137597.3137600]). The need" +"---\nabstract: 'The transfer matrix is a powerful technique that can be applied to statistical mechanics systems as, for example, in the calculation of the entropy of the ice model. One interesting way to study such systems is to map them onto a 3-color problem. In this paper, we explicitly build the transfer matrix for the 3-color problem in order to calculate the number of possible configurations for finite systems with free, periodic in one direction and toroidal boundary conditions (periodic in both directions)'\naddress: |\n    1 - Instituto de F\u00edsica, Universidade Federal do Rio Grande do Sul, Porto Alegre, Rio Grande do Sul, Brazil\\\n    2 - Departamento de F\u00edsica, Faculdade de Filosofia, Ci\u00eancias e Letras de Riber\u00e3o Preto, Universidade de S\u00e3o Paulo, Ribeir\u00e3o Preto, S\u00e3o Paulo, Brazil\nauthor:\n- 'Roberto da Silva$^{1}$, Silvio R. Dahmen$^{1}$, J. R. Drugowich de Fel\u00edcio$^{2}$'\ntitle: Transfer matrix in counting problems\n---\n\nTransfer matrix, toroidal boundary conditions, ice-type model, three-color problem\n\nIntroduction\n============\n\nThe transfer matrix technique in statistical physics was introduced by Kramers and Wannier in 1941 in the context of two-dimensional ferromagnetic systems [@Krammers-I; @Krammers-II]. However, its applicability extends beyond spin models [@Dimarzio; @Lieb-I; @Lieb-II; @Baxter; @Pegg; @Teif]. They are very useful" +"---\nabstract: 'Keyword spotting and in particular Wake-Up-Word (WUW) detection is a very important task for voice assistants.
A very common issue of voice assistants is that they get easily activated by background noise like music, TV or background speech that accidentally triggers the device. In this paper, we propose a Speech Enhancement (SE) model adapted to the task of WUW detection that aims at increasing the recognition rate and reducing the false alarms in the presence of these types of noises. The SE model is a fully-convolutional denoising auto-encoder at waveform level and is trained using a log-Mel Spectrogram and waveform reconstruction losses together with the BCE loss of a simple WUW classification network. A new database has been purposely prepared for the task of recognizing the WUW in challenging conditions, containing negative samples that are very phonetically similar to the keyword. The database is extended with public databases and an exhaustive data augmentation to simulate different noises and environments. The results obtained by concatenating the SE with simple and state-of-the-art WUW detectors show that the SE does not have a negative impact on the recognition rate in quiet environments while increasing the performance in the presence of" +"---\nauthor:\n- |\n    Ines Wilms$^{a}$ and Jacob Bien$^b$\\\n    *$^{a}$ Department of Quantitative Economics, Maastricht University, Maastricht, The Netherlands*\\\n    *$^{b}$ Data Sciences and Operations, University of Southern California, Los Angeles, CA, USA*\nbibliography:\n- 'refs.bib'\ndate: \ntitle: '**Tree-based Node Aggregation in Sparse Graphical Models**'\n---\n\n#### Abstract.\n\nHigh-dimensional graphical models are often estimated using regularization that is aimed at reducing the number of edges in a network. In this work, we show how even simpler networks can be produced by aggregating the nodes of the graphical model. We develop a new convex regularized method, called the [*tree-aggregated graphical lasso*]{} or tag-lasso, that estimates graphical models that are both edge-sparse and node-aggregated. The aggregation is performed in a data-driven fashion by leveraging side information in the form of a tree that encodes node similarity and facilitates the interpretation of the resulting aggregated nodes. We provide an efficient implementation of the tag-lasso by using the locally adaptive alternating direction method of multipliers and illustrate our proposal\u2019s practical advantages in simulation and in applications in finance and biology.\n\n#### Keywords.\n\naggregation, graphical model, high-dimensionality, regularization, sparsity\n\nIntroduction {#intro}\n============\n\nGraphical models are greatly useful for understanding the relationships among large numbers of variables." +"---\nabstract: 'In the past decade we have witnessed the failure of traditional polls in predicting presidential election outcomes across the world. To understand the reasons behind these failures we analyze the raw data of a trusted pollster which failed to predict, along with the rest of the pollsters, the surprising 2019 presidential election in Argentina which has led to a major market collapse in that country. Analysis of the raw and re-weighted data from longitudinal surveys performed before and after the elections reveals clear biases (beyond well-known low-response rates) related to mis-representation of the population and, most importantly, to social-desirability biases, i.e., the tendency of respondents to hide their intention to vote for controversial candidates.
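As an aside on the speech-enhancement training objective described two abstracts above, here is a hedged sketch of a combined loss of that kind (log-Mel reconstruction + waveform reconstruction + classifier BCE); the weights, the `mel_fn` transform, and the L1 choice are assumptions, not the authors' exact recipe.

```python
# Hedged sketch of a combined SE + WUW training objective (illustrative only).
import torch
import torch.nn.functional as F

def se_wuw_loss(enhanced, clean, logits, labels, mel_fn,
                w_mel=1.0, w_wav=1.0, w_bce=0.5):
    """mel_fn: any log-Mel spectrogram transform (e.g. torchaudio-based)."""
    mel_loss = F.l1_loss(mel_fn(enhanced), mel_fn(clean))   # log-Mel term
    wav_loss = F.l1_loss(enhanced, clean)                   # waveform term
    bce_loss = F.binary_cross_entropy_with_logits(logits, labels)
    return w_mel * mel_loss + w_wav * wav_loss + w_bce * bce_loss
```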
We then propose a longitudinal opinion tracking method based on big-data analytics from social media, machine learning, and network theory that overcomes the limits of traditional polls. The model achieves accurate results in the 2019 Argentina elections, predicting the overwhelming victory of the candidate Alberto Fern\u00e1ndez over the president Mauricio Macri; a result that none of the traditional pollsters in the country was able to predict. Beyond predicting political elections, the framework we propose is more general and can be used to discover trends" +"---\nabstract: 'In this paper, we address the problem of uncertainty propagation through nonlinear stochastic dynamical systems. More precisely, given a discrete-time continuous-state probabilistic nonlinear dynamical system, we aim at finding the sequence of the moments of the probability distributions of the system states up to any desired order over the given planning horizon. Moments of uncertain states can be used in estimation, planning, control, and safety analysis of stochastic dynamical systems. Existing approaches to address moment propagation problems provide approximate descriptions of the moments and are mainly limited to a particular set of uncertainties, e.g., Gaussian disturbances. In this paper, to describe the moments of uncertain states, we introduce trigonometric and also mixed-trigonometric-polynomial moments. Such moments allow us to obtain closed deterministic dynamical systems that describe the *exact* time evolution of the moments of uncertain states of an important class of autonomous and robotic systems including underwater, ground, and aerial vehicles, robotic arms and walking robots. The obtained deterministic dynamical systems can be used, in a receding horizon fashion, to propagate the uncertainties over the planning horizon in *real-time*. To illustrate the performance of the proposed method, we benchmark our method against existing approaches including linear, unscented transformation, and sampling" +"---\nabstract: 'Peg-in-hole assembly is a challenging contact-rich manipulation task. There is no general solution to identify the relative position and orientation between the peg and the hole. In this paper, we propose a novel method to classify the contact poses based on a sequence of contact measurements. When the peg contacts the hole with pose uncertainties, a tilt-then-rotate strategy is applied, and the contacts are measured as a group of patterns to encode the contact pose. A convolutional neural network (CNN) is trained to classify the contact poses according to the patterns. In the end, an admittance controller guides the peg towards the error direction and finishes the peg-in-hole assembly. Simulations and experiments are provided to show that the proposed method can be applied to the peg-in-hole assembly of different geometries. We also demonstrate the ability to alleviate the sim-to-real gap.'\nauthor:\n- 'Shiyu Jin, Xinghao Zhu, Changhao Wang, and Masayoshi Tomizuka[^1]'\nbibliography:\n- 'bib.bib'\ntitle: 'Contact Pose Identification for Peg-in-Hole Assembly under Uncertainties '\n---\n\nIntroduction\n============\n\nRobotic peg-in-hole assembly has been studied for decades.
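As an aside, a minimal instance of the trigonometric moments introduced above, assuming a planar unicycle with Gaussian heading noise: since $E[\cos w] = e^{-\sigma^2/2}$ and $E[\sin w] = 0$ for $w \sim \mathcal{N}(0, \sigma^2)$, the first moments propagate in closed form, with no linearisation or sampling.

```python
import numpy as np

# Exact propagation of E[x], E[cos(theta)], E[sin(theta)] for
#   x_{k+1} = x_k + v*cos(theta_k),  theta_{k+1} = theta_k + omega + w_k,
# with independent heading noise w_k ~ N(0, sigma^2).
v, omega, sigma = 1.0, 0.1, 0.05
Ex, Ec, Es = 0.0, np.cos(0.3), np.sin(0.3)   # initial heading 0.3 rad, known
a = np.exp(-sigma**2 / 2)                    # E[cos w]; E[sin w] vanishes
for _ in range(50):
    Ex += v * Ec
    Ec, Es = (a * (Ec * np.cos(omega) - Es * np.sin(omega)),
              a * (Ec * np.sin(omega) + Es * np.cos(omega)))
print(Ex)   # exact E[x_50]; a Monte Carlo estimate converges to this value
```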
It is challenging because it requires accurate state estimations of the peg and the hole for alignment, and a combination of precise planning" +"---\nabstract: 'We use a simulation-based modelling approach to analyse the anisotropic clustering of the BOSS LOWZ sample over the radial range $0.4 \\, {h^{-1} \\, \\mathrm{Mpc}}$ to $63 \\, {h^{-1} \\, \\mathrm{Mpc}}$, significantly extending what is possible with a purely analytic modelling framework. Our full-scale analysis yields constraints on the growth of structure that are a factor of two more stringent than any other study on large scales at similar redshifts. We infer $f \\sigma_8 = 0.471 \\pm 0.024$ at $z \\approx 0.25$, and $f \\sigma_8 = 0.431 \\pm 0.025$ at $z \\approx 0.40$; the corresponding $\\Lambda$CDM predictions of the Planck CMB analysis are $0.470 \\pm 0.006$ and $0.476 \\pm 0.005$, respectively. Our results are thus consistent with Planck, but also follow the trend seen in previous low-redshift measurements of $f \\sigma_8$ falling slightly below the $\\Lambda$CDM+CMB prediction. We find that small and large radial scales yield mutually consistent values of $f \\sigma_8$, but there are $1-2.5 \\sigma$ hints of small scales ($< 10 \\, {h^{-1} \\, \\mathrm{Mpc}}$) preferring lower values for $f \\sigma_8$ relative to larger scales. We analyse the constraining power of the full range of radial scales, finding that most of the multipole information about $f\\sigma_8$" +"---\nabstract: 'The ELLIS PhD program is a European initiative that supports excellent young researchers by connecting them to leading researchers in AI. In particular, PhD students are supervised by two advisors from different countries: an advisor and a co-advisor. In this work we summarize the procedure that, in its final step, matches students to advisors in the ELLIS 2020 PhD program. The steps of the procedure are based on the extensive literature of two-sided matching markets and the college admissions problem [@knuth1997stable; @gale1962college; @roth1992two]. We introduce [PolyGS]{}, an algorithm for the case of two-sided markets with quotas on both sides (also known as many-to-many markets) which we use throughout the selection procedure of pre-screening, interview matching and final matching with advisors. The algorithm returns a stable matching in the sense that no unmatched persons prefer to be matched together rather than with their current partners (given their indicated preferences). [@roth1984evolution] gives evidence that only stable matchings are likely to be adhered to over time. Additionally, the matching is student-optimal. Preferences are constructed based on the rankings each side gives to the other side and the overlaps of research fields. We present and discuss the matchings that the algorithm" +"---\nabstract: '*COVID-19 infection caused by SARS-CoV-2 pathogen is a catastrophic pandemic outbreak all over the world with exponential increasing of confirmed cases and, unfortunately, deaths. In this work we propose an AI-powered pipeline, based on the deep-learning paradigm, for automated COVID-19 detection and lesion categorization from CT scans. We first propose a new segmentation module aimed at identifying automatically lung parenchyma and lobes. Next, we combined such segmentation network with classification networks for COVID-19 identification and lesion categorization. We compare the obtained classification results with those obtained by three expert radiologists on a dataset consisting of 162 CT scans. 
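As an aside, PolyGS generalises deferred acceptance to quotas on both sides; the sketch below shows only the classical student-proposing, one-to-many variant on hypothetical toy data, to make the stability mechanics concrete.

```python
def deferred_acceptance(student_prefs, advisor_prefs, quotas):
    """Student-proposing deferred acceptance; returns advisor -> held students."""
    rank = {a: {s: r for r, s in enumerate(p)} for a, p in advisor_prefs.items()}
    free = list(student_prefs)                  # students yet to be settled
    nxt = {s: 0 for s in student_prefs}         # next advisor each student tries
    held = {a: [] for a in advisor_prefs}
    while free:
        s = free.pop()
        if nxt[s] >= len(student_prefs[s]):
            continue                            # s exhausted their list: unmatched
        a = student_prefs[s][nxt[s]]; nxt[s] += 1
        held[a].append(s)
        held[a].sort(key=lambda x: rank[a][x])  # advisor keeps its best students
        if len(held[a]) > quotas[a]:
            free.append(held[a].pop())          # reject the worst held student
    return held

print(deferred_acceptance(
    {"s1": ["a1", "a2"], "s2": ["a1"], "s3": ["a1", "a2"]},
    {"a1": ["s1", "s3", "s2"], "a2": ["s3", "s1"]},
    {"a1": 1, "a2": 2}))    # {'a1': ['s1'], 'a2': ['s3']}; s2 stays unmatched
```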
Results showed a sensitivity of 90% and a specificity of 93.5% for COVID-19 detection, outperforming those yielded by the expert radiologists, and an average lesion categorization accuracy of over 84%. Results also show that a significant role is played by prior lung and lobe segmentation, which allowed us to enhance performance by over 20 percentage points. The interpretation of the trained AI models, moreover, reveals that the most significant areas for supporting the decision on COVID-19 identification are consistent with the lesions clinically associated with the virus, i.e., crazy paving, consolidation and ground glass. This means that the artificial models" +"---\nabstract: 'Tablut is a complete-knowledge, deterministic, and asymmetric board game, which has neither been solved nor properly studied yet. In this work, its rules and characteristics are presented; then a study on its complexity is reported. An upper bound on its complexity is eventually found by dividing the state-space of the game into subspaces according to specific conditions. This upper bound is comparable to the one found for Draughts; therefore, it would seem that the open challenge of solving this game requires a considerable computational effort.'\nauthor:\n- Andrea Galassi\nbibliography:\n- 'biblio.bib'\ntitle: An Upper Bound on the Complexity of Tablut\n---\n\nIntroduction\n============\n\nTablut is a strategy board game belonging to the family of Tafl games, a group of Celtic and Nordic asymmetric board games designed for two players, which share similar rules. Tafl games (sometimes called Hnefatafl games) may derive from the Roman game Ludus latrunculorum, and have evolved into many different variants of the original game, such as Tablut, Brandubh, Hnefatafl, and Tawlbwrdd. The exact rules of these games are difficult to know, since little documentation has survived to the present day, and Tablut is probably the one for which most information is available.\n\nIndeed," +"---\nabstract: 'We present a method to mitigate the atmospheric effects (residual atmospheric lines) in single-dish radio spectroscopy caused by the elevation difference between the target and reference positions. The method is developed as a script using the Atmospheric Transmission at Microwaves (ATM) library built into the Common Astronomy Software Applications (CASA) package. We apply the method to the data taken with the Total Power Array of the Atacama Large Millimeter/submillimeter Array. The intensities of the residual atmospheric (mostly O$_3$) lines are suppressed by, typically, an order of magnitude for the tested cases. The parameters for the ATM model can be optimized to minimize the residual line and, for a specific O$_3$ line at 231.28 GHz, a seasonal dependence of a best-fitting model parameter is demonstrated. The method will be provided as a task within the CASA package in the near future.
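As an aside, the flavour of the Tablut state-space bound discussed above can be reproduced with elementary counting. The sketch assumes the standard Tablut setup (16 black soldiers, 8 white soldiers and a king on a 9x9 board) and deliberately ignores throne, camp and capture legality, so it over-counts by construction; the paper's subspace conditions tighten such a bound.

```python
from math import comb

cells, total = 81, 0
for b in range(17):                 # up to 16 black soldiers on the board
    for w in range(9):              # up to 8 white soldiers on the board
        # choose cells for black, then white, then the king; x2 for side to move
        total += comb(cells, b) * comb(cells - b, w) * (cells - b - w)
total *= 2
print(f"{total:.3e}")               # a loose bound on the order of 10^28
```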
The atmospheric removal method we developed can be used by any radio/millimeter/submillimeter observatory to improve the quality of its spectroscopic measurements.'\nauthor:\n- Tsuyoshi Sawada\n- 'Chin-Shin Chang'\n- Harold Francke\n- Laura Gomez\n- 'Jeffrey G.\u00a0Mangum'\n- Yusuke Miyamoto\n- Takeshi Nakazato\n- Suminori Nishie\n- 'Neil M.\u00a0Phillips'\n- Yoshito Shimajiri\n- Kanako Sugimoto\nbibliography:" +"---\nabstract: |\n The limit of the entropy in the stochastic block model (SBM) has been characterized in the sparse regime for the special case of disassortative communities [@10.1145/3055399.3055420] and for the classical case of assortative communities but in the dense regime [@DAM15]. The problem has not been closed in the classical sparse and assortative case. This paper establishes the result in this case for any SNR outside the interval $(1,3.513)$. It further gives an approximation to the limit in this window.\n\n The result is obtained by expressing the global SBM entropy as an integral of local tree entropies in a broadcasting-on-tree model with erasure side-information. The main technical advancement then relies on showing the irrelevance of the boundary in such a model, also studied with variants in\u00a0[@kanade2014global], [@Mossel_2016] and\u00a0[@mossel2015local]. In particular, we establish the uniqueness of the BP fixed point in the survey model for any SNR above 3.513 or below 1. This only leaves a narrow region in the plane between SNR and survey strength where the uniqueness of BP conjectured in these papers remains unproved.\nauthor:\n- 'Emmanuel Abbe [^1]'\n- 'Elisabetta Cornacchia [^2]'\n- 'Yuzhou Gu [^3]'\n- 'Yury Polyanskiy [^4]'" +"---\nabstract: 'Distributed quantum information processing is essential for building quantum networks and enabling more extensive quantum computations. In this regime, several spatially separated parties share a multipartite quantum system, and the most natural set of operations is Local Operations and Classical Communication (LOCC). As a pivotal part of quantum information theory and practice, LOCC has led to many vital protocols such as quantum teleportation. However, designing practical LOCC protocols is challenging due to LOCC\u2019s intractable structure and limitations set by near-term quantum devices. Here we introduce LOCCNet, a machine learning framework facilitating protocol design and optimization for distributed quantum information processing tasks. As applications, we explore various quantum information tasks such as entanglement distillation, quantum state discrimination, and quantum channel simulation. We discover protocols with evident improvements, in particular, for entanglement distillation with quantum states of interest in quantum information. Our approach opens up new opportunities for exploring entanglement and its applications with machine learning, which will potentially sharpen our understanding of the power and limitations of LOCC. An implementation of LOCCNet is available in Paddle Quantum, a quantum machine learning Python package based on the PaddlePaddle deep learning platform.'\nauthor:\n- Xuanqiang Zhao\n- Benchi Zhao\n- Zihe Wang" +"---\nabstract: 'In Maurice Merleau-Ponty\u2019s phenomenology of perception, analysis of perception accounts for an element of intentionality, and, in effect, perception and action cannot be viewed as distinct procedures.
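As an aside, one of the LOCC tasks mentioned above, discrimination of orthogonal states by local measurements plus classical communication, fits in a few lines of plain NumPy (this does not use Paddle Quantum or LOCCNet): both parties measure in the computational basis and compare their bits over a classical channel.

```python
import numpy as np

# Sampling the joint outcome from |amplitude|^2 is equivalent here to two
# sequential local Z measurements followed by classical communication.
rng = np.random.default_rng(1)
bell = {"Phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),   # (|00> + |11>)/sqrt(2)
        "Psi+": np.array([0, 1, 1, 0]) / np.sqrt(2)}   # (|01> + |10>)/sqrt(2)
for name, state in bell.items():
    outcome = rng.choice(4, p=np.abs(state) ** 2)      # joint index 2a + b
    a, b = outcome >> 1, outcome & 1                   # Alice's and Bob's bits
    guess = "Phi+" if a == b else "Psi+"               # equal bits <=> |Phi+>
    print(name, "->", guess)                           # identified with certainty
```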
Along the same line of thinking, Alva No\u00eb considers perception as a thoughtful activity that relies on capacities for action and thought. Here, by looking into psychology as a source of inspiration, we propose a computational model for the action involved in visual perception based on the notion of equilibrium as defined by Jean Piaget. In such a model, Piaget\u2019s equilibrium reflects the mind\u2019s status, which is used to control the observation process. The proposed model is built around a modified version of convolutional neural networks (CNNs) with enhanced filter performance, where characteristics of filters are adaptively adjusted via a high-level control signal that accounts for the thoughtful activity in perception. While the CNN plays the role of the visual system, the control signal is assumed to be a product of the mind.'\nauthor:\n- 'Aref\u00a0Hakimzadeh, Yanbo\u00a0Xue, and\u00a0Peyman\u00a0Setoodeh [^1] [^2]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'Main.bib'\ntitle: 'Enacted Visual Perception: A Computational Model based on Piaget Equilibrium'\n---\n\nPiaget equilibrium, schema theory, visual perception, convolutional neural network.\n\nIntroduction" +"---\nabstract: 'Over the last two decades, financial systems have been studied and analysed from the perspective of complex networks, where the nodes and edges in the network represent the various financial components and the strengths of correlations between them. Here, we adopt a similar network-based approach to analyse the daily closing prices of 69 global financial market indices across 65 countries over a period of 2000-2014. We study the correlations among the indices by constructing threshold networks superimposed over minimum spanning trees at different time frames. We investigate the effect of critical events in financial markets (crashes and bubbles) on the interactions among the indices by performing both static and dynamic analyses of the correlations. We compare and contrast the structures of these networks during periods of crashes and bubbles, with respect to the normal periods in the market. In addition, we study the temporal evolution of traditional market indicators, various global network measures and the recently developed edge-based curvature measures. We show that network-centric measures can be extremely useful in monitoring the fragility in the global financial market indices.'\nauthor:\n- Areejit Samal\n- Sunil Kumar\n- Yasharth Yadav\n- Anirban Chakraborti\ntitle: 'Network-centric indicators for fragility in" +"---\nabstract: 'We study the fundamental problem of butterfly (i.e., (2,2)-biclique) counting in bipartite streaming graphs. Similar to triangles in unipartite graphs, enumerating butterflies is crucial in understanding the structure of bipartite graphs. This benefits many applications where studying the cohesion of graph-shaped data is of particular interest. Examples include investigating the structure of computational graphs or input graphs to the algorithms, as well as dynamic phenomena and analytic tasks over complex real graphs. Butterfly counting is computationally expensive, and known techniques do not scale to large graphs; the problem is even harder in streaming graphs. In this paper, following a data-driven methodology, we first conduct an empirical analysis to uncover temporal organizing principles of butterflies in real streaming graphs and then we introduce an approximate adaptive window-based algorithm, sGrapp, for counting butterflies as well as its optimized version sGrapp-x.
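As an aside, the quantity that sGrapp approximates on streams has a simple exact counter in the static setting, which is useful as a reference: butterflies are aggregated from wedges, i.e., pairs of right-side vertices sharing a left-side neighbour.

```python
from collections import defaultdict
from itertools import combinations

def count_butterflies(edges):
    """Exact butterfly ((2,2)-biclique) count of a static bipartite graph."""
    adj = defaultdict(set)
    for u, v in edges:                    # u on the left side, v on the right
        adj[u].add(v)
    wedges = defaultdict(int)             # wedges[(v, w)] = #common neighbours
    for nbrs in adj.values():
        for v, w in combinations(sorted(nbrs), 2):
            wedges[(v, w)] += 1
    return sum(c * (c - 1) // 2 for c in wedges.values())

print(count_butterflies([(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b')]))   # -> 1
```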
sGrapp is designed to operate efficiently and effectively over any graph stream with any temporal behavior. Experimental studies of sGrapp and sGrapp-x show superior performance in terms of both accuracy and efficiency.'\nauthor:\n- Aida Sheshbolouki\n- 'M. Tamer [\u00d6]{}zsu'\nbibliography:\n- 'main.bib'\ntitle: 'sGrapp: Butterfly Approximation in Streaming Graphs'\n---\n\nIntroduction {#sec:intro}\n============\n\nIn this paper we" +"---\nabstract: 'The design of an optimal inverse discrete cosine transform (IDCT) that compensates for the quantization error is proposed for effective lossy image compression in this work. The forward and inverse DCTs are designed as a pair in current image/video coding standards without taking the quantization effect into account. Yet, the distribution of quantized DCT coefficients deviates from that of the original DCT coefficients. This is particularly obvious when the quality factor of JPEG-compressed images is small. To address this problem, we first use a set of training images to learn the compound effect of forward DCT, quantization and dequantization in cascade. Then, a new IDCT kernel is learned to reverse the effect of such a pipeline. Experiments are conducted to demonstrate the advantage of the new method, which achieves a gain of 0.11-0.30dB over standard JPEG across a wide range of quality factors.'\nauthor:\n- \nbibliography:\n- 'refs.bib'\ntitle: |\n A Machine Learning Approach to Optimal Inverse\\\n Discrete Cosine Transform (IDCT) Design\n---\n\nIntroduction {#sec:introduction}\n============\n\nMany image and video compression standards have been developed in the last thirty years. Examples include JPEG [@wallace1992jpeg], JPEG2000 [@rabbani2002jpeg2000], and BPG [@bpg] for image compression and MPEG-1 [@brandenburg1994iso], MPEG-2 [@haskell1996digital], MPEG-4" +"---\nauthor:\n- Stefan Heidekr\u00fcger\n- Paul Sutterer\n- Nils Kohring\n- Maximilian Fichtl\n- Martin Bichler\nbibliography:\n- 'bibliography.bib'\ntitle: |\n Equilibrium Learning in Combinatorial Auctions:\\\n Computing Approximate Bayesian Nash Equilibria via Pseudogradient Dynamics\n---\n\nIntroduction\n============\n\nAuctions are widely used in advertising, procurement, and spectrum sales [@bichler2017HandbookSpectrumAuction; @milgrom2017DiscoveringPricesAuction; @ashlagi2011SimultaneousAdAuctions]. Auction markets inherently involve incomplete information about competitors and strategic behavior of market participants. Understanding decision making in such markets has long been an important line of research in game theory. Auctions are typically modeled as Bayesian games and one is particularly interested in the equilibria of such games.\n\nIt is well-known that equilibrium computation is hard: Finding Nash equilibria is known to be PPAD-complete even for normal-form games, which assume complete information and finite action spaces, and where a Nash equilibrium is guaranteed to exist [@daskalakis2009ComplexityComputingNash]. In auction games modeled as Bayesian games with continuous type and action spaces, agents\u2019 values are drawn from some continuous prior value distribution and their strategies are described as continuous bid functions on these valuations. For markets of a single item, the landmark results by @vickrey1961CounterspeculationAuctionsCompetitive have enabled a deep understanding of common auction formats.
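As an aside, for the single-item case just mentioned, equilibrium learning via pseudogradient dynamics can be illustrated in a few lines. The sketch below uses finite-difference pseudogradients over a restricted linear strategy class in a symmetric two-bidder first-price auction with uniform values, whose Bayes-Nash equilibrium bid is $v/2$; it is a toy stand-in, not the authors' neural implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, lr, eps, batch = np.array([0.9, 0.9]), 0.1, 0.05, 5000

def utility(a, i, vals):
    """Empirical payoff of bidder i when bids are b_j = a[j] * v_j."""
    bids = a[None, :] * vals
    wins = bids.argmax(axis=1) == i                  # ties have measure zero
    return float(np.mean((vals[:, i] - bids[:, i]) * wins))

for _ in range(1000):
    vals = rng.uniform(size=(batch, 2))              # common random numbers
    for i in (0, 1):
        up, dn = alpha.copy(), alpha.copy()
        up[i] += eps; dn[i] -= eps
        grad = (utility(up, i, vals) - utility(dn, i, vals)) / (2 * eps)
        alpha[i] += lr * grad                        # simultaneous ascent step
print(alpha)                                         # drifts towards [0.5, 0.5]
```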
For multi-item auctions and more specifically" +"**Microcavity phonon polaritons \u2013 from weak to ultra-strong phonon-photon coupling**\n\nMar\u00eda Barra-Burillo$^{1,*}$, Unai Muniain$^{2,*}$, Sara Catalano$^{1}$, Marta Autore$^{1}$, Felix Casanova$^{1,3}$, Luis E. Hueso$^{1,3}$, Javier Aizpurua$^{2,4}$, Ruben Esteban$^{2,4}$ and Rainer Hillenbrand$^{3,5}$\n\n$^1$*CIC nanoGUNE BRTA, 20018 Donostia-San Sebasti\u00e1n, Spain*\n\n$^2$*Donostia International Physics Center, 20018 Donostia-San Sebasti\u00e1n, Spain*\n\n$^3$*IKERBASQUE, Basque Foundation for Science, 45011 Bilbao, Spain*\n\n$^4$*Materials Physics Center, CSIC-UPV/EHU, 20018 Donostia-San Sebasti\u00e1n, Spain*\n\n$^5$*CIC nanoGUNE BRTA and EHU/UPV, 20018 Donostia-San Sebasti\u00e1n, Spain*\n\n\\*These authors contributed equally to this work.\n\nCorresponding author: r.hillenbrand@nanogune.eu\n\n**Abstract:** Strong coupling between molecular vibrations and microcavity modes has been demonstrated to modify physical and chemical properties of the molecular material. Here, we study the much less explored coupling between lattice vibrations (phonons) and microcavity modes. Embedding thin layers of hexagonal boron nitride (hBN) into classical microcavities, we demonstrate the evolution from weak to ultrastrong phonon-photon coupling when the hBN thickness is increased from a few nanometers to a fully filled cavity. Remarkably, strong coupling is achieved for hBN layers as thin as 10 nm. Further, the ultrastrong coupling in fully filled cavities yields a cavity polariton dispersion matching that of phonon polaritons in bulk hBN, highlighting that the maximum light-matter coupling in microcavities is limited to the coupling" +"---\nabstract: 'Monolayer black and blue phosphorenes possess electronic and optical properties that result in unique features when the two materials are stacked. We devise a low-strain van-der-Waals double layer and investigate its properties with [*ab initio*]{} many-body perturbation theory techniques. A type-II band alignment and optical absorption in the visible range are found. The study demonstrates that spatially indirect excitons with full charge separation can be obtained between two layers with the same elemental composition but different crystalline structure, proving the system interesting for further studies where dipolar excitons are important and for future opto-electronic applications.'\nauthor:\n- Michele Re Fiorentin\n- Giancarlo Cicero\n- Maurizia Palummo\ntitle: Spatially indirect excitons in black and blue phosphorene double layers\n---\n\n**Cite as:** Phys. Rev. Materials **4**, 074009 (2020), [doi.org/10.1103/PhysRevMaterials.4.074009](https://doi.org/10.1103/PhysRevMaterials.4.074009)\n\nIntroduction\n============\n\nOver the past decades, a wide range of two-dimensional (2D) materials have been synthesized in the wake of the crucial discovery of graphene [@geim]. 2D materials such as hexagonal boron nitride [@hBN], metal carbides and nitrides [@MXenes1; @MXenes_review], transition metal dichalcogenides (TMDs) [@TMDs_review] and single-element monolayers such as silicene [@silicene] and germanene [@germanene], have been under extensive experimental and theoretical scrutiny because of their remarkable physical, electronic and optical" +"---\nabstract: 'I study dynamic random utility with finite choice sets and exogenous total menu variation, which I refer to as stochastic utility (SU). First, I characterize SU when each choice set has three elements. 
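As an aside, the Block-Marschak machinery invoked in the next sentence can be made concrete in the static case: for a genuine random-utility mixture over rankings, all Block-Marschak sums are nonnegative (Falmagne's theorem), which the toy script below verifies on a three-element set.

```python
import numpy as np
from itertools import combinations, permutations

X = (0, 1, 2)
ranks = list(permutations(X))                # the six strict preference orders
rng = np.random.default_rng(0)
w = rng.dirichlet(np.ones(len(ranks)))       # a random mixture over rankings

def q(x, A):
    """Choice probability of x from menu A under the random-utility mixture."""
    return sum(wi for wi, r in zip(w, ranks) if min(A, key=r.index) == x)

for x in X:
    for k in (1, 2, 3):
        for A in combinations(X, k):
            if x not in A:
                continue
            K = sum((-1) ** (len(B) - len(A)) * q(x, B)
                    for kB in range(len(A), len(X) + 1)
                    for B in combinations(X, kB) if set(A) <= set(B))
            assert K >= -1e-12               # Block-Marschak sums are >= 0
print("all Block-Marschak sums nonnegative")
```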
Next, I prove several mathematical identities for joint, marginal, and conditional Block\u2013Marschak sums, which I use to obtain two characterizations of SU when each choice set but the last has three elements. As a corollary under the same cardinality restrictions, I sharpen an axiom to obtain a characterization of SU with full support over preference tuples. I conclude by characterizing SU without cardinality restrictions. All of my results hold over an arbitrary finite discrete time horizon.'\nauthor:\n- 'Ricky Li[^1]'\nbibliography:\n- 'BibFile.bib'\ndate: 'This version: June 20, 2022'\ntitle: Dynamic Random Choice\n---\n\nIntroduction and Related Literature\n===================================\n\nA classic result in decision theory is [@sen1971choice]\u2019s characterization of deterministic choice functions that can be represented by strict preference relations. However, economic choice data is often nondeterministic. In such cases, the analogous primitive and representation is a *stochastic* choice function (SCF) and *random* utility (RU) model. [@block1959random] define RU on arbitrary finite choice sets and show that their axiom requiring that the SCF\u2019s *Block\u2013Marschak sums*" +"---\nabstract: 'In this paper we develop an optimisation based approach to multivariate Chebyshev approximation on a finite grid. We consider two models: multivariate polynomial approximation and multivariate generalised rational approximation. In the second case the approximations are ratios of linear forms and the basis functions are not limited to monomials. It is already known that in the case of multivariate polynomial approximation on a finite grid the corresponding optimisation problems can be reduced to solving a linear programming problem, while the area of multivariate rational approximation is not so well understood. In this paper we demonstrate that in the case of multivariate generalised rational approximation the corresponding optimisation problems are quasiconvex. This statement remains true even when the basis functions are not limited to monomials. Then we apply a bisection method, which is a general method for quasiconvex optimisation. This method converges to an optimal solution with given precision. We demonstrate that the convex feasibility problems appearing in the bisection method can be solved using linear programming. Finally, we compare the deviation error and computational time for multivariate polynomial and generalised rational approximation with the same number of decision variables.'\n---\n\n[**[Multivariate approximation by polynomial and generalised rational functions.]{}**]{}" +"---\nabstract: 'The concept of spin ice can be extended to a general graph. We study the degeneracy of spin ice graph on arbitrary interaction structures via graph theory. Via the mapping of spin ices to the Ising model, we clarify whether the inverse mapping is possible via a modified Krausz construction. From the gauge freedom of frustrated Ising systems, we derive exact, general results about frustration and degeneracy. We demonstrate for the first time that every spin ice graph, with the exception of the 1D Ising model, is degenerate. We then study how degeneracy scales in size, using the mapping between Eulerian trails and spin ice manifolds, and a permanental identity for the number of Eulerian orientations. We show that the Bethe permanent technique provides both an estimate and a lower bound to the frustration of spin ices on arbitrary graphs of even degree. 
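As an aside, the degeneracy just discussed can be brute-forced on a small example: ground states obeying the ice rule are exactly the Eulerian orientations, counted below for $K_5$, the complete graph on five vertices, in which every vertex has degree 4.

```python
from itertools import combinations, product

V = range(5)
E = list(combinations(V, 2))             # K5 has 10 edges
count = 0
for flips in product((0, 1), repeat=len(E)):
    out = [0] * 5
    for (u, v), f in zip(E, flips):
        out[u if f == 0 else v] += 1     # orient the edge u->v or v->u
    if all(o == 2 for o in out):         # ice rule: two-in, two-out everywhere
        count += 1
print(count)                             # number of ice-rule states of K5
```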
While such a technique can also be used to obtain an upper bound, we find that in all but one of the examples we studied, another upper bound, based on Schrijver\u2019s inequality, is tighter.'\nauthor:\n- Francesco Caravelli\n- Michael Saccone\n- Cristiano Nisoli\ntitle: 'On the Degeneracy of Spin Ice Graphs, and Its Estimate via the" +"---\nabstract: 'We present the statistical characterization of a 2x2 Multiple-Input Multiple-Output wireless link operated in a mode-stirred enclosure, with channel state information available only at the receiver (agnostic transmitter). Our wireless channel measurements are conducted in the absence of line of sight and varying the inter-element spacing between the two antenna elements in both the transmit and receive array. The mode-stirred cavity is operated: i) at a low number of stirrer positions to create statistical inhomogeneity; ii) at two different loading conditions, empty and with absorbers, in order to mimic a wide range of realistic equipment-level enclosures. Our results show that two parallel channels are obtained within the confined space at both operating conditions. The statistical characterization of the wireless channel is presented in terms of coherence bandwidth, path loss, delay spread and Rician factor, and wideband channel capacity. It is found that the severe multipath fading supported by a highly reflecting environment creates an imbalance between the two Multiple-Input Multiple-Output channels, even in the presence of substantial losses. Furthermore, the channel capacity has a multi-modal distribution whose average and variance scale monotonically with the number of absorbers. Results are of interest for IoT devices, including wireless chip-to-chip and device-to-device" +"---\nabstract: 'We prove discrete-to-continuum convergence of interaction energies defined on lattices in the Euclidean space (with interactions beyond nearest neighbours) to a crystalline perimeter, and we discuss the possible Wulff shapes obtainable in this way. Exploiting the \u201cmultigrid construction\u201d of quasiperiodic tilings (which is an extension of De Bruijn\u2019s \u201cpentagrid\u201d construction of Penrose tilings), we adapt the same techniques to also find the macroscopic homogenized perimeter when we microscopically rescale a given quasiperiodic tiling.'\nauthor:\n- 'Giacomo Del\u00a0Nin [^1]'\n- 'Mircea Petrache [^2]'\nbibliography:\n- 'quasicrystals.bib'\ntitle: Continuum limits of discrete isoperimetric problems and Wulff shapes in lattices and quasicrystal tilings\n---\n\n[**MSC (2020)**: 49Q20, 49J45 (primary); 49Q10, 52B11, 52C07, 52C22, 52C23 (secondary). **Keywords**: isoperimetric problem, Wulff shape, discrete-to-continuum, lattices, quasicrystals, Gamma convergence, homogenization. ]{}\n\nIntroduction\n============\n\nThe question of what crystal shapes are induced by what kind of interactions has preoccupied researchers since the beginning of the field of crystallography. Mathematically, the study of crystal shapes was first put on a firm ground within the continuum theory, starting with the work of Wulff [@wulff], later reformulated and extended by Herring [@herring] and others [@liebmann; @laue; @dinghas]; see also [@Tay78] and references therein for the connection to" +"---\nabstract: 'Visual interpretability of Convolutional Neural Networks (CNNs) has gained significant popularity because of the great challenges that CNN complexity imposes on understanding their inner workings.
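As an aside, the capacity figure of merit used in the enclosure study above is quick to sketch. The snippet below assumes an i.i.d. Rayleigh 2x2 channel with equal per-antenna power and channel state information at the receiver only, rather than the measured mode-stirred channel, so the numbers are only indicative.

```python
import numpy as np

rng = np.random.default_rng(0)
snr, nt, trials = 10.0, 2, 10000        # 10 dB SNR, two transmit antennas
caps = []
for _ in range(trials):
    H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    # receiver-only CSI with equal power split: C = log2 det(I + (snr/nt) H H^H)
    caps.append(np.log2(np.linalg.det(np.eye(2) + (snr / nt) * H @ H.conj().T)).real)
print(np.mean(caps))                    # ergodic capacity in bits/s/Hz
```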
Although many techniques have been proposed to visualize class features of CNNs, most of them do not provide a correspondence between inputs and the extracted features in specific layers. This prevents the discovery of stimuli that each layer responds better to. We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer. Exploring features in this class-agnostic manner allows for a greater focus on the feature extractor of CNNs. Our method uses a dual-objective activation maximization and distance minimization loss, without requiring a generator network nor modifications to the original model. This limits the number of FLOPs to that of the original network. We demonstrate the visualization quality on widely-used architectures.[^1]'\naddress: |\n Department of Information and Computing Sciences, Utrecht University, Utrecht, Netherlands\\\n a.g.stergiou@uu.nl\nbibliography:\n- 'refs.bib'\ntitle: 'The Mind\u2019s Eye: Visualizing Class-Agnostic Features of CNNs'\n---\n\nFeature visualization, CNN explainability, convolutional features\n\nIntroduction {#sec:intro}\n============\n\n![**Top 50 extracted features**. ResNet-50 [@he2016deep] was used with features" +"---\nabstract: 'We study the benefits of complex-valued weights for neural networks. We prove that shallow complex neural networks with quadratic activations have no spurious local minima. In contrast, shallow real neural networks with quadratic activations have infinitely many spurious local minima under the same conditions. In addition, we provide specific examples to demonstrate that complex-valued weights turn poor local minima into saddle points. The activation function $\\mathbb{C}$ReLU is also discussed to illustrate the superiority of analytic activations in complex-valued neural networks.'\nauthor:\n- 'Xingtu Liu [^1]'\nbibliography:\n- 'biblio.bib'\ntitle: 'Neural Networks with Complex-Valued Weights Have No Spurious Local Minima'\n---\n\nIntroduction\n============\n\nNeural networks have seen great success empirically, which has inspired a large amount of theoretical work. However, neural networks lack rigorous mathematical foundations to explain their success at the moment. Optimization, among other aspects of deep learning theory, has received considerable attention in recent years. One challenge in deep learning is to avoid gradient descent being stuck at poor local minima. Thus, analyzing the optimization landscape of neural networks has been a major subject of study. In fact, almost all non-linear real-valued neural networks are shown to have poor local minima, which includes neural networks with" +"---\nabstract: 'We present high-resolution Magellan/MIKE spectra of 22 bright ($9
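As an aside, the activation-maximisation half of the dual objective described above reduces to gradient ascent on the input; the sketch below omits the distance-minimisation term and the paper's exact loss, and uses an untrained ResNet-50 purely so that it runs out of the box (pretrained weights give meaningful visualisations).

```python
import torch
from torchvision import models

model = models.resnet50(weights=None).eval()
feats = {}
model.layer3.register_forward_hook(lambda m, i, o: feats.update(out=o))

x = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from noise
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    model(x)
    loss = -feats["out"].norm()       # ascend the layer's activation energy
    loss.backward()
    opt.step()
```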