diff --git "a/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2020_08.jsonl" "b/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2020_08.jsonl" new file mode 100644--- /dev/null +++ "b/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2020_08.jsonl" @@ -0,0 +1,970 @@ +"---\nabstract: 'Rigidity percolation (RP) is the emergence of mechanical stability in networks. Motivated by the experimentally observed fractal nature of materials like colloidal gels and disordered fiber networks, we study RP in a fractal network where intrinsic correlations in particle positions is controlled by the fractal iteration. Specifically, we calculate the critical packing fractions of site-diluted lattices of Sierpi\u0144ski gaskets (SG\u2019s) with varying degrees of fractal iteration. Our results suggest that although the correlation length exponent and fractal dimension of the RP of these lattices are identical to that of the regular triangular lattice, the critical volume fraction is dramatically lower due to the fractal nature of the network. Furthermore, we develop a simplified model for an SG lattice based on the fragility analysis of a single SG. This simplified model provides an upper bound for the critical packing fractions of the full fractal lattice, and this upper bound is strictly obeyed by the disorder averaged RP threshold of the fractal lattices. Our results characterize rigidity in ultra-low-density fractal networks.'\nauthor:\n- Shae Machlus\n- Shang Zhang\n- Xiaoming Mao\nbibliography:\n- 'fractalRP.bib'\ntitle: Correlated rigidity percolation in fractal lattices\n---\n\nINTRODUCTION\n============\n\nSoft disordered solids are ubiquitous; they" +"---\nabstract: 'An important yet largely unsolved problem in the statistical mechanics of disordered quantum systems is to understand how quenched disorder affects quantum phase transitions in systems of itinerant fermions. In the clean limit, continuous quantum phase transitions of the symmetry-breaking type in Dirac materials such as graphene and the surfaces of topological insulators are described by relativistic (2+1)-dimensional quantum field theories of the Gross-Neveu-Yukawa (GNY) type. We study the universal critical properties of the chiral Ising, XY, and Heisenberg GNY models perturbed by quenched random-mass disorder, both uncorrelated or with long-range power-law correlations. Using the replica method combined with a controlled triple epsilon expansion below four dimensions, we find a variety of new finite-randomness critical and multicritical points with nonzero Yukawa coupling between low-energy Dirac fields and bosonic order parameter fluctuations, and compute their universal critical exponents. Analyzing bifurcations of the renormalization-group flow, we find instances of the fixed-point annihilation scenario\u2014continuously tuned by the power-law exponent of long-range disorder correlations and associated with an exponentially large crossover length\u2014as well as the transcritical bifurcation and the supercritical Hopf bifurcation. The latter is accompanied by the birth of a stable limit cycle on the critical hypersurface, which represents the first" +"---\nabstract: 'This work proposes DeepFolio, a new model for deep portfolio management based on data from limit order books (LOB). DeepFolio solves problems found in the state-of-the-art for LOB data to predict price movements. 
Our evaluation consists of two scenarios using a large dataset of millions of time series. The improvements deliver superior results both in cases of abundant as well as scarce data. The experiments show that DeepFolio outperforms the state-of-the-art on the benchmark FI-2010 LOB. Further, we use DeepFolio for optimal portfolio allocation of crypto-assets with rebalancing. For this purpose, we use two loss-functions - Sharpe ratio loss and minimum volatility risk. We show that DeepFolio outperforms widely used portfolio allocation techniques in the literature.'\nauthor:\n- \nbibliography:\n- 'bib.bib'\ntitle: 'DeepFolio: Convolutional Neural Networks for Portfolios with Limit Order Book Data [^1] '\n---\n\nInvestment Portfolios, Big Data Mining, Cryptoassets, Convolutional Neural Networks\n\nIntroduction {#submission}\n============\n\nMore than half of the financial world uses electronic Limit Order Books (LOBs). LOBS are a store of records of all transactions, [@rosu2010liquidity], [@parlour2008limit]. A limit order is a request to transact with a financial instrument at a price not exceeding a threshold, [@murphy1986technical]. Usually, traders set so-called buy limit" +"---\nabstract: |\n Quadratic discriminant analysis (QDA) is a widely used classification technique. Based on a training dataset, each class in the data is characterized by an estimate of its center and shape, which can then be used to assign unseen observations to one of the classes. The traditional QDA rule relies on the empirical mean and covariance matrix. Unfortunately, these estimators are sensitive to label and measurement noise which often impairs the model\u2019s predictive ability. Robust estimators of location and scatter are resistant to this type of contamination. However, they have a prohibitive computational cost for large scale industrial experiments. We present a novel QDA method based on a recent real-time robust algorithm. We additionally integrate an anomaly detection step to classify the most atypical observations into a separate class of outliers. Finally, we introduce the label bias plot, a graphical display to identify label and measurement noise in the training data. The performance of the proposed approach is illustrated in a simulation study with huge datasets, and on real datasets about diabetes and fruit.\\\nauthor:\n- |\n Iwein Vranckx, Jakob Raymaekers, Bart De Ketelaere,\\\n Peter J. Rousseeuw, Mia Hubert\\\n \\\n KU Leuven, BE-3001 Heverlee, Belgium\ndate: 'November 10," +"---\nabstract: 'Graphs have become increasingly popular in modeling structures and interactions in a wide variety of problems during the last decade. Graph-based clustering and semi-supervised classification techniques have shown impressive performance. This paper proposes a graph learning framework to preserve both the local and global structure of data. Specifically, our method uses the self-expressiveness of samples to capture the global structure and adaptive neighbor approach to respect the local structure. Furthermore, most existing graph-based methods conduct clustering and semi-supervised classification on the graph learned from the original data matrix, which doesn\u2019t have explicit cluster structure, thus they might not achieve the optimal performance. By considering rank constraint, the achieved graph will have exactly $c$ connected components if there are $c$ clusters or classes. As a byproduct of this, graph learning and label inference are jointly and iteratively implemented in a principled way. 
Theoretically, we show that our model is equivalent to a combination of kernel k-means and k-means methods under certain condition. Extensive experiments on clustering and semi-supervised classification demonstrate that the proposed method outperforms other state-of-the-art methods.'\naddress:\n- 'School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China.'\n- 'College" +"---\nabstract: 'In diarization, the PLDA is typically used to model an inference structure which assumes the variation in speech segments be induced by various speakers. The speaker variation is then learned from the training data. However, human perception can differentiate speakers by age, gender, among other characteristics. In this paper, we investigate a speaker-type informed model that explicitly captures the known variation of speakers. We explore a mixture of three PLDA models, where each model represents an adult female, male, or child category. The weighting of each model is decided by the prior probability of its respective class, which we study. The evaluation is performed on a subset of the BabyTrain corpus. We examine the expected performance gain using the oracle speaker type labels, which yields an 11.7% DER reduction. We introduce a novel baby vocalization augmentation technique and then compare the mixture model to the single model. Our experimental result shows an effective 0.9% DER reduction obtained by adding vocalizations. We discover empirically that a balanced dataset is important to train the mixture PLDA model, which outperforms the single PLDA by 1.3% using the same training data and achieving a 35.8% DER. The same setup improves over a" +"---\nabstract: 'Extraordinarily transmitting arrays are promising candidates for quasi-optical (QO) components due to their high frequency selectivity and beam scanning capabilities owing to the leaky-wave mechanism involved. We show here how by breaking certain unit cell and lattice symmetries, one can achieve a rich family of transmission resonances associated with the leaky-wave dispersion along the surface of the array. By combining two dimensional and one dimensional periodic Method of Moments (MoM) calculations with QO Terahertz (THz) time-domain measurements, we provide physical insight, numerical and experimental demonstration of the different mechanisms involved in the resonances associated with the extraordinary transmission peaks and how these evolve with the number of slots. Thanks to the THz instrument used, we are also able to explore the time-dependent emission of the different frequency components involved.'\nauthor:\n- 'Miguel Camacho, , Ajla Nekovic, Suzanna Freer, Pavel Penchev, Rafael R. Boix, , Stefan Dimov and Miguel Navarro-C\u00eda, [^1] [^2] [^3] [^4] [^5] [^6]'\nbibliography:\n- 'library.bib'\ntitle: 'Symmetry and Finite-size Effects in Quasi-optical Extraordinarily THz Transmitting Arrays of Tilted Slots'\n---\n\nExtraordinary transmission, frequency selective surface, method of moments, quasi-optics, terahertz, time-domain spectrometer.\n\nIntroduction {#sec:introduction}\n============\n\nthe early 1990s, the extraordinary optical tranmission (EOT) phenomenon through" +"---\nabstract: '$E$-theory was originally defined concretely by Connes and Higson\u00a0[@CH] and further work followed this construction. We generalise the definition to $C^\\ast$-categories. 
$C^\\ast$-categories were formulated to give a theory of operator algebras in a categorical picture and play important role in the study of mathematical physics. In this context, they are analogous to $C^\\ast$-algebras and so have invariants defined coming from $C^\\ast$-algebra theory but they do not yet have a definition of $E$-theory. Here we define $E$-theory for both complex and real graded $C^\\ast$-categories and prove it has similar properties to $E$-theory for $C^\\ast$-algebras.'\nauthor:\n- 'Sarah L. Browne and Paul D. Mitchener'\nbibliography:\n- 'data.bib'\ntitle: '$E$-theory for $C^\\ast$-categories'\n---\n\n[ ]{}\n\nIntroduction\n============\n\nThroughout this article, we use ${\\mathbb{F}}$ to denote either the field of real numbers ${\\mathbb{R}}$ or the field of complex numbers ${\\mathbb{C}}$. $C^\\ast$-categories are analogous to $C^\\ast$-algebras but give a much more general framework which do not require a choice of Hilbert space. In particular, we can consider all Hilbert spaces and the collection of all bounded linear operators on them simultaneously, and we can view a $C^\\ast$-algebra as a one object $C^\\ast$-category.\n\n$E$-theory is an invariant for $C^\\ast$-algebras. $K$-theory for $C^\\ast$-algebras was" +"---\nabstract: 'In this paper, we develop asymptotic theories for a class of latent variable models for large-scale multi-relational networks. In particular, we establish consistency results and asymptotic error bounds for the (penalized) maximum likelihood estimators when the size of the network tends to infinity. The basic technique is to develop a non-asymptotic error bound for the maximum likelihood estimators through large deviations analysis of random fields. We also show that these estimators are nearly optimal in terms of minimax risk.'\nauthor:\n- 'Zhi Wang, Xueying Tang and Jingchen Liu'\nbibliography:\n- 'mrg.bib'\ntitle: 'Statistical Analysis of Multi-Relational Network Recovery'\n---\n\nIntroduction {#sec:intro}\n============\n\nA multi-relational network (MRN) describes multiple relations among a set of entities simultaneously. Our work on MRNs is mainly motivated by its applications to knowledge bases that are repositories of information. Examples of knowledge bases include WordNet [@miller1995wordnet], Unified Medical Language System [@mccray2003upper], and Google Knowledge Graph (). They have been used as the information source in many natural language processing tasks such as word-sense disambiguation and machine translation [@gabrilovich2009wikipedia; @scott1999feature; @ferrucci2010building]. A knowledge base often includes knowledge on a large number of real-world objects or concepts. When a knowledge base is characterized by MRN, the" +"---\nabstract: 'Fluorescence microscopy allows for a detailed inspection of cells, cellular networks, and anatomical landmarks by staining with a variety of carefully-selected markers visualized as color channels. Quantitative characterization of structures in acquired images often relies on automatic image analysis methods. Despite the success of deep learning methods in other vision applications, their potential for fluorescence image analysis remains underexploited. One reason lies in the considerable workload required to train accurate models, which are normally specific for a given combination of markers, and therefore applicable to a very restricted number of experimental settings. 
We herein propose *Marker Sampling and Excite* \u2014 a neural network approach with a modality sampling strategy and a novel attention module that together enable ($i$)\u00a0flexible training with heterogeneous datasets with combinations of markers and ($ii$)\u00a0successful utility of learned models on arbitrary subsets of markers prospectively. We show that our single neural network solution performs comparably to an upper bound scenario where an ensemble of many networks is na\u00efvely trained for each possible marker combination separately. In addition, we demonstrate the feasibility of this framework in high-throughput biological analysis by revising a recent quantitative characterization of bone marrow vasculature in 3D confocal microscopy datasets" +"---\nabstract: |\n This note is my comment on Glenn Shafer\u2019s discussion paper \u201cTesting by betting\u201d [@Shafer:2020-local], together with two online appendices comparing p-values and betting scores.\n\n The version of this note at (Working Paper 8) is updated most often.\nauthor:\n- 'Vladimir Vovk [Department of Computer Science, Royal Holloway, University of London, Egham, Surrey, UK. E-mail: .]{}'\ntitle: 'Comment on Glenn Shafer\u2019s \u201cTesting by betting\u201d'\n---\n\nMain comment {#main-comment .unnumbered}\n============\n\nGlenn Shafer\u2019s paper is a powerful appeal for a wider use of betting ideas and intuitions in statistics. He admits that p-values will never be completely replaced by betting scores, and I discuss it further in Appendix\u00a0A (one of the online appendices, also including Appendix\u00a0G and [@Vovk:B], that I have prepared to meet the word limit). Both p-values and betting scores generalize Cournot\u2019s principle [@Shafer:2007-local], but they do it in their different ways, and both ways are interesting and valuable.\n\nOther authors have referred to betting scores as Bayes factors [@Shafer/etal:2011] and e-values [@Vovk/Wang:arXiv1912a-local; @Grunwald/etal:arXiv1906]. For simple null hypotheses, betting scores and Bayes factors indeed essentially coincide [@Grunwald/etal:arXiv1906 Section\u00a01, interpretation 3], but for composite null hypotheses they are different notions, and using \u201cBayes factor\u201d" +"---\nabstract: 'This note considers the notion of divergence-preserving branching bisimilarity. It briefly surveys results pertaining to the notion that have been obtained in the past one-and-a-half decade, discusses its role in the study of expressiveness of process calculi, and concludes with some suggestions for future work.'\nauthor:\n- Bas Luttik\nbibliography:\n- 'dpbb.bib'\ntitle: 'Divergence-Preserving Branching Bisimilarity'\n---\n\nIntroduction\n============\n\n*Branching bisimilarity* was proposed by van Glabbeek and Weijland as an upgrade of (strong) bisimilarity that facilitates abstraction from internal activity [@GW96]. It preserves the branching structure of processes more strictly than Milner\u2019s *observation equivalence* [@Mil80], which, according to van Glabbeek and Weijland, makes it, e.g., better suited for verification purposes. A case in point is the argument by Graf and Sifakis that there is no temporal logic with an *eventually* operator that is adequate for observation equivalence in the sense that two processes satisfy the same formulas if, and only if, they are observationally equivalent [@GS87]. The crux is that observation equivalence insufficiently takes into account the intermediate states of an internal computation. 
Indeed, branching bisimilarity requires a stronger correspondence between the intermediate states of an internal computation.\n\nBranching bisimilarity is also not compatible with a temporal logic" +"---\nabstract: 'In current deep learning paradigms, local training or the *Standalone* framework tends to result in overfitting and thus poor generalizability. This problem can be addressed by *Distributed* or *Federated Learning* (FL) that leverages a parameter server to aggregate model updates from individual participants. However, most existing Distributed or FL frameworks have overlooked an important aspect of participation: collaborative fairness. In particular, all participants can receive the same or similar models, regardless of their contributions. To address this issue, we investigate the collaborative fairness in FL, and propose a novel *Collaborative Fair Federated Learning* (CFFL) framework which utilizes reputation to enforce participants to converge to different models, thus achieving fairness without compromising the predictive performance. Extensive experiments on benchmark datasets demonstrate that CFFL achieves high fairness, delivers comparable accuracy to the *Distributed* framework, and outperforms the *Standalone* framework. Our code is available on [github](https://github.com/XinyiYS/CollaborativeFairFederatedLearning).'\nauthor:\n- 'Lingjuan Lyu$^{1*}$'\n- |\n Xinyi Xu$^{1*}$ Qian Wang$^{3*}$ $^1$Department of Computer Science, National University of Singapore, Singapore\\\n $^3$School of Cyber Science and Engineering, Wuhan University\\\n Corresponding to: lyulj@comp.nus.edu.sg, xuxinyi@comp.nus.edu.sg, qianwang@whu.edu.cn.\nbibliography:\n- 'biblio.bib'\ntitle: Collaborative Fairness in Federated Learning\n---\n\nIntroduction {#sec:introduction}\n============\n\nTraining complex deep neural networks on large-scale datasets is computationally" +"---\nabstract: |\n We study two variants of the shortest path problem. Given an integer $k$, the *$k$-color-constrained* and the *$k$-interchange-constrained* shortest path problems, respectively seek a shortest path that uses no more than $k$ colors and one that makes no more than $k-1$ alternations of colors. We show that the former problem is NP-hard, when the latter is tractable. The study of these problems is motivated by some limitations in the use of diameter-based metrics to evaluate the topological structure of transit networks. We notably show that indicators such as the diameter or directness of a transit network fail to adequately account for travel convenience in measuring the connectivity of a network and propose a new network indicator, based on solving the *$k$-interchange-constrained* shortest path problem, that aims at alleviating these limitations.\\\n **Keywords:** Graph Theory, Shortest Path Problem, Computational Complexity, Transit Networks, Network Indicators.\naddress: |\n Business Administration Division,\\\n Mahidol University International College\\\n Salaya, 73170, Thailand\\\n Nassim.deh@mahidol.ac.th \nauthor:\n- Nassim Dehouche\ntitle: 'The $k$-interchange-constrained diameter of a transit network: A connectedness indicator that accounts for travel convenience'\n---\n\nAcknowledgement {#acknowledgement .unnumbered}\n===============\n\nThe author would like to thank Dr. 
Yuval Filmus for his helpful advice, as well as two" +"---\nauthor:\n- Debajyoti Sarkar\n- and Manus Visser\nbibliography:\n- 'Notes-new.bib'\ntitle: The first law of differential entropy and holographic complexity \n---\n\nIntroduction {#sec:intro}\n============\n\nDeriving gravitational thermodynamics of black holes [@Bekenstein:1973ur; @Bardeen:1973gs; @Hawking:1974sw] from a microscopic perspective remains one of the guiding principles in the quest for quantum gravity. The microscopic state counting of black hole entropy [@Strominger:1996sh] is considered to be one of the major successes of string theory. Later, this microscopic derivation of black hole entropy was reinterpreted\u00a0[@Strominger:1997eq] in terms of the Anti-de Sitter (AdS)/ Conformal Field Theory (CFT) correspondence\u00a0[@Maldacena:1997re], where the entropy of three-dimensional AdS black holes [@Banados:1992wn; @Banados:1992gq] matches with the thermodynamic entropy in two-dimensional CFTs [@Cardy:1986ie]. In higher dimensions, it has also been argued that the mass, entropy and temperature of AdS black holes can be identified with the energy, entropy and temperature of a thermal state in the dual CFT at high temperature\u00a0[@Witten:1998zw].\n\nFurthermore, the correspondence between gravitational entropy and CFT entropy can be extended to the entanglement entropy of subregions on the conformal boundary of AdS. The Ryu-Takayagani (RT) formula [@Ryu:2006bv; @Ryu:2006ef] states that the entanglement entropy of a subregion\u00a0$\\mathcal R$ in the CFT is, to leading" +"---\nabstract: 'The spatial distribution of population and activities within urban areas, or urban form at the mesoscopic scale, is the outcome of multiple antagonist processes. We propose in this paper to benchmark different models of urban morphogenesis, to systematically compare the urban forms they can produce. Different types of approaches are included, such as a reaction-diffusion model, a gravity-based model, and correlated percolation. Applying a diversity search algorithm, we estimate the feasible space of each model within a space of urban form indicators, in comparison of empirical values for worldwide urban areas. We find a complementarity of the different types of processes, advocating for a plurality of urban models.'\nauthor:\n- |\n Juste Raimbault^1,2,3,\\*^\\\n ^1^ Center for Advanced Spatial Analysis, University College London\\\n ^2^ UPS CNRS 3611 ISC-PIF\\\n ^3^ UMR CNRS 8504 G[\u00e9]{}ographie-cit[\u00e9]{}s\\\n \\* juste.raimbault@polytechnique.edu\ntitle: A comparison of simple models for urban morphogenesis\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nUnderstanding the dynamics of cities is an increasing issue for sustainability, since the proportion of the world population expected to live in cities will grow to a large majority in the next decades, and that cities combine both positive and negative externalities on most aspects. Their complexity implies that quantitative" +"---\nabstract: '[ **Abstract.**]{} We compute hybrid static potentials in SU(2) lattice gauge theory using a multilevel algorithm and three different small lattice spacings. The resulting static potentials, which are valid for quark-antiquark separations as small as $0.05\\, \\text{fm}$, are important e.g.\u00a0when computing masses of heavy hybrid mesons in the Born-Oppenheimer approximation. 
We also discuss and exclude possible systematic errors from topological freezing, the finite lattice volume and glueball decays.'\nauthor:\n- |\n Carolin Riehl[^1] , Marc Wagner\\\n Goethe-Universit\u00e4t Frankfurt, Institut f\u00fcr Theoretische Physik, Max-von-Laue-Stra[\u00df]{}e 1,\\\n D-60438 Frankfurt am Main, Germany\\\n Helmholtz Research Academy Hesse for FAIR, Campus Riedberg, Max-von-Laue-Stra[\u00df]{}e 12,\\\n D-60438 Frankfurt am Main, Germany\ntitle: 'Hybrid static potentials in SU(2) lattice gauge theory at short quark-antiquark separations'\n---\n\nIntroduction\n============\n\nHybrid static potentials represent the energy of an excited gluon field surrounding a static quark and a static antiquark as a function of their separation and are, thus, related to heavy hybrid mesons. Due to the gluonic excitations, quantum numbers of hybrid mesons can be different from those predicted by the constituent quark model. The investigation of exotic mesons like hybrid mesons and tetraquarks are currently a very active field of research, both theoretically and experimentally (for" +"---\nabstract: 'In recent years, misinformation on the Web has become increasingly rampant. The research community has responded by proposing systems and challenges, which are beginning to be useful for (various subtasks of) detecting misinformation. However, most proposed systems are based on deep learning techniques which are fine-tuned to specific domains, are difficult to interpret and produce results which are not machine readable. This limits their applicability and adoption as they can only be used by a select expert audience in very specific settings. In this paper we propose an architecture based on a core concept of Credibility Reviews (CRs) that can be used to build networks of distributed bots that collaborate for misinformation detection. The CRs serve as building blocks to compose graphs of (i) web content, (ii) existing credibility signals \u2013fact-checked claims and reputation reviews of websites\u2013, and (iii) automatically computed reviews. We implement this architecture on top of lightweight extensions to Schema.org and services providing generic NLP tasks for semantic similarity and stance detection. Evaluations on existing datasets of social-media posts, fake news and political speeches demonstrates several advantages over existing systems: extensibility, domain-independence, composability, explainability and transparency via provenance. Furthermore, we obtain competitive results without requiring" +"---\nabstract: 'We study the spectrum of the Laplacian on the hemisphere with Robin boundary conditions. It is found that the eigenvalues fall into small clusters around the Neumann spectrum, and satisfy a Szeg\u0151 type limit theorem. Sharp upper and lower bounds for the gaps between the Robin and Neumann eigenvalues are derived, showing in particular that these are unbounded. Further, it is shown that except for a systematic double multiplicity, there are no multiplicities in the spectrum as soon as the Robin parameter is positive, unlike the Neumann case which is highly degenerate. 
Finally, the limiting spacing distribution of the desymmetrized spectrum is proved to be the delta function at the origin.'\naddress:\n- 'School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel'\n- 'Department of Mathematics, King\u2019s College London, UK'\nauthor:\n- Ze\u00e9v Rudnick and Igor Wigman\ntitle: On the Robin spectrum for the hemisphere\n---\n\n\\[section\\] \\[thm\\][Lemma]{} \\[thm\\][Proposition]{} \\[thm\\][Corollary]{} \\[thm\\][Project]{}\n\n[^1]\n\nIntroduction\n============\n\nThe Robin problem\n-----------------\n\nLet $\\Omega $ be the upper unit hemisphere (Figure\u00a0\\[fig:hemisphere\\]), with its boundary $\\partial \\Omega$ the equator. Our goal is to study the Robin boundary problem on the hemisphere $\\Omega$: $$\\Delta F+\\lambda F=0, \\quad \\frac{\\partial F}{\\partial n} +\\sigma" +"---\nabstract: 'We study five Luminous Blue Variable (LBV) candidates in the Andromeda galaxy and one more (MN112) in the Milky Way. We obtain the same-epoch near-infrared (NIR) and optical spectra on the 3.5-meter telescope at the Apache Point Observatory and on the 6-meter telescope of the SAO RAS. The candidates show typical LBV features in their spectra: broad and strong hydrogen lines, , , and \\[\\] lines. We estimate the temperatures, reddening, radii and luminosities of the stars using their spectral energy distributions. Bolometric luminosities of the candidates are similar to those of known LBV stars in the Andromeda galaxy. One candidate, J004341.84+411112.0, demonstrates photometric variability (about 0.27mag in *V* band), which allows us to classify it as a LBV. The star J004415.04+420156.2 shows characteristics typical for B\\[e\\]-supergiants. The star J004411.36+413257.2 is classified as FeII star. We confirm that the stars J004621.08+421308.2 and J004507.65+413740.8 are warm hypergiants. We for the first time obtain NIR spectrum of the Galactic LBV candidate MN112. We use both optical and NIR spectra of MN112 for comparison with similar stars in M31 and notice identical spectra and the same temperature in the J004341.84+411112.0. This allows us to confirm that MN112 is a LBV, which" +"---\nabstract: 'A self-dual map $G$ is said to be [*antipodally self-dual*]{} if the dual map $G^*$ is antipodal embedded in ${\\mathbb{S}^2}$ with respect to $G$. In this paper, we investigate necessary and/or sufficient conditions for a map to be antipodally self-dual. In particular, we present a combinatorial characterization for map $G$ to be antipodally self-dual in terms of certain [*involutive labelings*]{}. The latter lead us to obtain necessary conditions for a map to be [*strongly involutive*]{} (a notion relevant for its connection with convex geometric problems). We also investigate the relation of antipodally self-dual maps and the notion of [*antipodally symmetric*]{} maps. It turns out that the latter is a very helpful tool to study questions concerning the [*symmetry*]{} as well as the [*amphicheirality*]{} of [*links*]{}.'\naddress:\n- 'Instituto de Matem\u00e1ticas, Universidad Nacional A. de M\u00e9xico at Quer\u00e9taro Quer\u00e9taro, M\u00e9xico, CP. 07360'\n- ' UMI2924 - J.-C. Yoccoz, CNRS-IMPA, Brazil and Univ.\u00a0Montpellier, France '\n- 'IMAG, Univ.\u00a0Montpellier, CNRS, Montpellier, France'\nauthor:\n- Luis Montejano$^1$\n- 'Jorge L. 
Ram\u00edrez Alfons\u00edn$^2$'\n- Ivan Rasskin\ntitle: 'Self-dual Maps I : antipodality'\n---\n\n[^1] [^2]\n\nIntroduction\n============\n\nLet $G$ be a [*map*]{}, that is, a graph cellularly embedded in the sphere." +"---\nabstract: 'The task of spatial-temporal action detection has attracted attention among researchers. dominant methods solve this problem by relying on short-term information and detection on each frames or clips. these methods showed of long-term information and inefficiency. In this paper, in a sparse-to-dense manner. There are characteristics in this framework: (1) Both long-term and short-term sampled information are explicitly utilized in our spatio-temporal network, (2) A dynamic feature sampling module (DTS) is designed to effectively approximate the tube output while keeping the system . We of our model on the UCF101-24, JHMDB-21 and UCFSports datasets, promising results competitive to state-of-the-art methods. our framework'\nauthor:\n- |\n **Yuxi Li^1,\\ 2^, Weiyao Lin^1,\\ 2^[^1], Tao Wang^1^, John See^3^**\\\n **Rui Qian^1^, Ning Xu^4^, Limin Wang^5^, Shugong Xu^2^**\\\n ^1^School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China\\\n ^2^Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China\\\n ^3^Multimedia University, Malaysia\\\n ^4^Adobe Research, USA\\\n ^5^State Key Laboratory for Novel Software Technology, Nanjing University, China\\\nbibliography:\n- 'egbib.bib'\ntitle: 'Finding Action Tubes with a Sparse-to-Dense Framework'\n---\n\nIntroduction\n============\n\nSpatial-temporal action detection is an essential technology for video understanding applications. In contrast to action recognition or temporal localization, where a video-level" +"---\nabstract: 'Real-world complex systems often comprise many distinct types of elements as well as many more types of networked interactions between elements. When the relative abundances of types can be measured well, we further observe heavy-tailed categorical distributions for type frequencies. For the comparison of type frequency distributions of two systems or a system with itself at different time points in time\u2014a facet of allotaxonometry\u2014a great range of probability divergences are available. Here, we introduce and explore \u2018probability-turbulence divergence\u2019, a tunable, straightforward, and interpretable instrument for comparing normalizable categorical frequency distributions. We model probability-turbulence divergence (PTD) after rank-turbulence divergence (RTD). While probability-turbulence divergence is more limited in application than rank-turbulence divergence, it is more sensitive to changes in type frequency. We build allotaxonographs to display probability turbulence, incorporating a way to visually accommodate zero probabilities for \u2018exclusive types\u2019 which are types that appear in only one system. We explore comparisons of example distributions taken from literature, social media, and ecology. We show how probability-turbulence divergence either explicitly or functionally generalizes many existing kinds of distances and measures, including, as special cases, $L^{(p)}$ norms, the S[\u00f8]{}rensen-Dice coefficient (the $F_1$ statistic), and the Hellinger distance. 
We discuss similarities with the generalized" +"---\nabstract: 'Superconducting stacks and bulks can act as very strong magnets (more than 17 T), but they lose their magnetization in the presence of alternating (or ripple) transverse magnetic fields, due to the dynamic magneto-resistance. This demagnetization is a major concern for applications requiring high run times, such as motors and generators, where ripple fields are of high amplitude and frequency. We have developed a numerical model based on dynamic magneto-resistance that is much faster than the conventional Power-Law-resistivity model, enabling us to simulate high number of cycles with the same accuracy. We simulate demagnetization behavior of superconducting stacks made of 10-100 tapes for up to 2 million cycles of applied ripple field. We found that for high number of cycles, the trapped field reaches non-zero stationary values for both superconducting bulks and stacks; as long as the ripple field amplitudes are below the parallel penetration field, being determined by the penetration field for a single tape in stacks. Bulks keep substantial stationary values for much higher ripple field amplitudes than the stacks, being relevant for high number of cycles. However, for low number of cycles, stacks lose much less magnetization as compared to bulks.'\nauthor:\n- |\n Anang" +"---\nabstract: 'By their very nature, Spin Waves (SWs) with different frequencies can propagate through the same waveguide without affecting each other, while only interfering with their own species. Therefore, more SW encoded data sets can coexist, propagate, and interact in parallel, which opens the road towards hardware replication free parallel data processing. In this paper, we take advantage of these features and propose a novel data parallel spin wave based computing approach. To explain and validate the proposed concept, byte-wide $2$-input XOR and $3$-input Majority gates are implemented and validated by means of Object Oriented MicroMagnetic Framework (OOMMF) simulations. Furthermore, we introduce an optimization algorithm meant to minimize the area overhead associated with multifrequency operation and demonstrate that it diminishes the byte-wide gate area by $30$% and $41$% for XOR and Majority implementations, respectively. To get inside on the practical implications of our proposal we compare the byte-wide gates with conventional functionally equivalent scalar SW gate based implementations in terms of area, delay, and power consumption. Our results indicate that the area optimized $8$-bit $2$-input XOR and $3$-input Majority gates require $4.47$x and $4.16$x less area, respectively, at the expense of $5$% and $7$% delay increase, respectively, without inducing" +"---\nabstract: 'When searching for exoplanets and ultimately considering their habitability, it is necessary to consider the planet\u2019s composition, geophysical processes, and geochemical cycles in order to constrain the bioessential elements available to life. Determining the elemental ratios for exoplanetary ecosystems is not yet possible, but we generally assume that planets have compositions similar to those of their host stars. Therefore, using the Hypatia Catalog of high-resolution stellar abundances for nearby stars, we compare the C, N, Si, and P abundance ratios of main sequence stars with those in average marine plankton, Earth\u2019s crust, as well as bulk silicate Earth and Mars. 
We find that, in general, plankton, Earth, and Mars are N-poor and P-rich compared with nearby stars. However, the dearth of P abundance data, which exists for only $\\sim$1% of all stars and 1% of exoplanet hosts, makes it difficult to deduce clear trends in the stellar data, let alone the role of P in the evolution of an exoplanet. Our Sun has relatively high P and Earth biology requires a small, but finite, amount of P. On rocky planets that form around host stars with substantially less P, the strong partitioning of P into the core could" +"---\nabstract: 'Embedding is a useful technique to project a high-dimensional feature into a low-dimensional space, and it has many successful applications including link prediction, node classification and natural language processing. Current approaches mainly focus on static data, which usually lead to unsatisfactory performance in applications involving large changes over time. How to dynamically characterize the variation of the embedded features is still largely unexplored. In this paper, we introduce a dynamic variational embedding (DVE) approach for sequence-aware data based on recent advances in recurrent neural networks. DVE can model the node\u2019s intrinsic nature and temporal variation explicitly and simultaneously, which are crucial for exploration. We further apply DVE to sequence-aware recommender systems, and develop an end-to-end neural architecture for link prediction.'\nauthor:\n- Meimei Liu Hongxia Yang\nbibliography:\n- 'ijcai19.bib'\ntitle: 'DVE: Dynamic Variational Embeddings with Applications in Recommender Systems'\n---\n\n[**Key Words:**]{} dynamic variational embedding, link prediction, neural collaborative filtering, recommendation system, sequence data.\n\nIntroduction\n============\n\nGraph embeddings aim to learn a low-dimensional representation for each node in a graph accurately capturing relationships among the nodes. This has wide applicability in many graph analysis tasks including node classification [@bhagat2011node], clustering [@ding2001min], recommendation [@liben2007link], and visualization [@maaten2008visualizing]. Various embedding" +"---\nabstract: 'An empirical forward-modeling framework is developed to interpret the multi-wavelength properties of Active Galactic Nuclei (AGN) and provide insights into the overlap and incompleteness of samples selected at different parts of the electromagnetic spectrum. The core of the model are observationally derived probabilites on the occupation of galaxies by X-ray selected AGN. These are used to seed mock galaxies drawn from stellar-mass functions with accretion events and then associate them with spectral energy distributions that describe both the stellar and AGN emission components. This approach is used to study the complementarity between X-ray and WISE mid-infrared AGN selection methods. We first show that the basic observational properties of the X-ray and WISE AGN (magnitude, redshift distributions) are adequately reproduced by the model. We then infer the level of contamination of the WISE selection and show that this is dominated by non-AGN at redshifts $z<0.5$. These are star-forming galaxies that scatter into the WISE AGN selection wedge because of photometric uncertainties affecting their colours. Our baseline model shows a sharp drop in the number density of heavily obscured AGN above the Compton thick limit in the WISE bands. 
The model also overpredicts by a factor of 1.5 the fraction" +"---\nabstract: 'Product-related question answering (QA) is an important but challenging task in E-Commerce. It leads to a great demand on automatic review-driven QA, which aims at providing instant responses towards user-posted questions based on diverse product reviews. Nevertheless, the rich information about personal opinions in product reviews, which is essential to answer those product-specific questions, is underutilized in current generation-based review-driven QA studies. There are two main challenges when exploiting the opinion information from the reviews to facilitate the opinion-aware answer generation: (i) jointly modeling opinionated and interrelated information between the question and reviews to capture important information for answer generation, (ii) aggregating diverse opinion information to uncover the common opinion towards the given question. In this paper, we tackle opinion-aware answer generation by jointly learning answer generation and opinion mining tasks with a unified model. Two kinds of opinion fusion strategies, namely, static and dynamic fusion, are proposed to distill and aggregate important opinion information learned from the opinion mining task into the answer generation process. Then a multi-view pointer-generator network is employed to generate opinion-aware answers for a given product-related question. Experimental results show that our method achieves superior performance in real-world E-Commerce QA datasets, and effectively" +"---\nabstract: 'We construct a series of one-dimensional non-unitary dynamics consisting of both unitary and imaginary evolutions based on the Sachdev-Ye-Kitaev model. Starting from a short-range entangled state, we analyze the entanglement dynamics using the path integral formalism in the large $N$ limit. Among all the results that we obtain, two of them are particularly interesting: (1) By varying the strength of the imaginary evolution, the interacting model exhibits a first order phase transition from the highly entangled volume law phase to an area law phase; (2) The one-dimensional free fermion model displays an extensive critical regime with emergent two-dimensional conformal symmetry.'\nauthor:\n- Chunxiao Liu\n- Pengfei Zhang\n- Xiao Chen\nbibliography:\n- 'SYK.bib'\ntitle: 'Non-unitary dynamics of Sachdev-Ye-Kitaev chain'\n---\n\n[^1]\n\n[^2]\n\nIntroduction\n============\n\nRecent years have witnessed tremendous breakthrough in many-body quantum dynamics. For a closed many-body quantum system decoupled from the environment, under the unitary dynamics, the interaction in the system can lead to chaos and thermalize all the small subsystems. The total wave function acts as its own heat bath and this phenomenon is referred as quantum thermalization [@Srednicki_1994; @Deutsch_1991].\n\nThe irreversible thermalization process can be avoided if we allow non-unitary evolution, which naturally arises" +"---\nabstract: 'We present an efficient algorithm to compute the induced norms of finite-horizon Linear Time-Varying (LTV) systems. The formulation includes both induced $\\mathcal{L}_2$ and terminal Euclidean norm penalties. Existing computational approaches include the power iteration and bisection of a Riccati Differential Equation (RDE). The power iteration has low computation time per iteration but overall convergence can be slow. In contrast, the RDE condition provides guaranteed bounds on the induced gain but single RDE integration can be slow. 
The complementary features of these two algorithms are combined to develop a new algorithm that is both fast and provides provable upper and lower bounds on the induced norm within the desired tolerance. The algorithm also provides a worst-case disturbance input that achieves the lower bound on the norm. We also present a new proof which shows that the power iteration for this problem converges monotonically. Finally, we show a controllability Gramian based simpler computational method for induced $\\mathcal{L}_2$-to-Euclidean norm. This can be used to compute the reachable set at any time on the horizon. Numerical examples are provided to demonstrate the proposed algorithm.'\nauthor:\n- |\n Jyot Buch\\\n Department of AEM\\\n University of Minnesota\\\n Minneapolis, MN 55455\\\n `buch0271@umn.edu`\\\n Murat Arcak\\\n Department" +"---\nabstract: 'Dense embedding models are commonly deployed in commercial search engines, wherein all the document vectors are pre-computed, and near-neighbor search (NNS) is performed with the query vector to find relevant documents. However, the bottleneck of indexing a large number of dense vectors and performing an NNS hurts the query time and accuracy of these models. In this paper, we argue that high-dimensional and ultra-sparse embedding is a significantly superior alternative to dense low-dimensional embedding for both query efficiency and accuracy. Extreme sparsity eliminates the need for NNS by replacing them with simple lookups, while its high dimensionality ensures that the embeddings are informative even when sparse. However, learning extremely high dimensional embeddings leads to blow up in the model size. To make the training feasible, we propose a partitioning algorithm that learns such high dimensional embeddings across multiple GPUs without any communication. This is facilitated by our novel asymmetric mixture of [**S**]{}parse, [**O**]{}rthogonal, [**L**]{}earned [**a**]{}nd [**R**]{}andom (SOLAR) Embeddings. The label vectors are random, sparse, and near-orthogonal by design, while the query vectors are learned and sparse. We theoretically prove that our way of one-sided learning is equivalent to learning both query and label embeddings. With these unique properties," +"---\nabstract: 'We revisit the problem of the growth of dense/cold gas in the cloud-crushing setup with radiative cooling. The relative motion between the dense cloud and the diffuse medium produces a turbulent boundary layer of mixed gas with a short cooling time. This mixed gas may explain the ubiquity of the range of absorption/emission lines observed in various sources such as the circumgalactic medium and galactic/stellar/AGN outflows. Recently Gronke & Oh showed that the efficient radiative cooling of the mixed gas can lead to the continuous growth of the dense cloud. They presented a threshold cloud size for the growth of dense gas which was contradicted by the more recent works of Li et al. & Sparre et al. These thresholds are qualitatively different as the former is based on the cooling time of the mixed gas whereas the latter is based on the cooling time of the hot gas. Our simulations agree with the threshold based on the cooling time of the mixed gas. 
We argue that the radiative cloud-crushing simulations should be run long enough to allow for the late-time growth of the dense gas due to cooling of the mixed gas but not so long that" +"---\nabstract: 'The paper explores the application of a continuous action space soft actor-critic (SAC) reinforcement learning model to the area of automated market-making. The reinforcement learning agent receives a simulated flow of client trades, thus accruing a position in an asset, and learns to offset this risk by either hedging at simulated \u201cexchange\u201d spreads or by attracting an offsetting client flow by changing offered client spreads (skewing the offered prices). The question of learning minimum spreads that compensate for the risk of taking the position is being investigated. Finally, the agent is posed with a problem of learning to hedge a blended client trade flow resulting from independent price processes (a \u201cportfolio\u201d position). The position penalty method is introduced to improve the convergence. An Open-AI gym-compatible hedge environment is introduced and the Open AI SAC baseline RL engine is being used as a learning baseline.'\nauthor:\n- |\n Alexey Bakshaev\\\n alex.bakshaev@gmail.com\nbibliography:\n- 'myrefs2.bib'\nnocite: '[@*]'\ntitle: 'Market-making with reinforcement-learning (SAC)'\n---\n\n=-10pt\n\nIntroduction\n============\n\nLet\u2019s assume that our goal is to train our agent in a way that it can perform market making effectively. In this trading mode our agent puts out both the price it is willing" +"---\nabstract: |\n Determining the best partition for a dataset can be a challenging task because of 1) the lack of a priori information within an unsupervised learning framework; and 2) the absence of a unique clustering validation approach to evaluate clustering solutions. Here we present [[reval]{}]{}: a package that leverages stability-based relative clustering validation methods to determine best clustering solutions as the ones that best generalize to unseen data.\n\n Statistical software, both in and , usually rely on internal validation metrics, such as *silhouette*, to select the number of clusters that best fits the data. Meanwhile, open-source software solutions that easily implement relative clustering techniques are lacking. Internal validation methods exploit characteristics of the data itself to produce a result, whereas relative approaches attempt to leverage the unknown underlying distribution of data points looking for generalizable and replicable results.\n\n The implementation of relative validation methods can further the theory of clustering by enriching the already available methods that can be used to investigate clustering results in different situations and for different data distributions. This work aims at contributing to this effort by developing a stability-based method that selects the best clustering solution as the one that replicates, via supervised" +"---\nabstract: 'We extend the notion of jittered sampling to arbitrary partitions and study the discrepancy of the related point sets. Let ${{\\mathbf{\\Omega}}}=(\\Omega_1,\\ldots,\\Omega_N)$ be a partition of $[0,1]^d$ and let the $i$th point in ${{\\mathcal P}}$ be chosen uniformly in the $i$th set of the partition (and stochastically independent of the other points), $i=1,\\ldots,N$. For the study of such sets we introduce the concept of a uniformly distributed triangular array and compare this notion to related notions in the literature. 
We prove that the expected ${{{\\mathcal L}}_p}$-discrepancy, ${{\\mathbb E}}{{{\\mathcal L}}_p}({{\\mathcal P}}_{{\\mathbf{\\Omega}}})^p$, of a point set ${{\\mathcal P}}_{{\\mathbf{\\Omega}}}$ generated from any equivolume partition ${{\\mathbf{\\Omega}}}$ is always strictly smaller than the expected ${{{\\mathcal L}}_p}$-discrepancy of a set of $N$ uniform random samples for $p>1$. For fixed $N$ we consider classes of stratified samples based on equivolume partitions of the unit cube into convex sets or into sets with a uniform positive lower bound on their reach. It is shown that these classes contain at least one minimizer of the expected ${{{\\mathcal L}}_p}$-discrepancy. We illustrate our results with explicit constructions for small $N$. In addition, we present a family of partitions that seems to improve the expected discrepancy of Monte Carlo sampling by" +"---\nabstract: 'Legged robot locomotion requires the planning of stable reference trajectories, especially while traversing uneven terrain. The proposed trajectory optimization framework is capable of generating dynamically stable base and footstep trajectories for multiple steps. The locomotion task can be defined with contact locations, base motion or both, making the algorithm suitable for multiple scenarios (e.g., presence of moving obstacles). The planner uses a simplified momentum-based task space model for the robot dynamics, allowing computation times that are fast enough for online replanning. This fast planning capability also enables the quadruped to accommodate for drift and environmental changes. The algorithm is tested on simulation and a real robot across multiple scenarios, which includes uneven terrain, stairs and moving obstacles. The results show that the planner is capable of generating stable trajectories in the real robot even when a box of 15 cm height is placed in front of its path at the last moment.'\nauthor:\n- 'Oguzhan Cebe$^{1}$, Carlo Tiseo$^{1}$, Guiyang Xin$^{1}$, Hsiu-chin Lin$^{2}$, Joshua Smith$^{1}$, Michael Mistry$^{1}$[^1][^2][^3]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'main.bib'\ntitle: '**Online Dynamic Trajectory Optimization and Control for a Quadruped Robot** '\n---\n\nINTRODUCTION\n============\n\nLegged robots can traverse uneven terrains that are not suitable for wheeled" +"---\nabstract: 'Purported signatures of collective dynamics in small systems like proton-proton (pp) or proton-nucleus (p-A) collisions still lack unambiguous understanding. Despite the qualitative and/or quantitative agreement of the data to hydrodynamic models, it has remained unclear whether the harmonic flows in small systems relate to the common physical picture of hydrodynamic collectivity driven by the initial geometry. In the present work, we aim to address this issue by invoking a novel concept of Event Shape Engineering (ESE), which has been leveraged to get some control of the initial geometry in high-energy heavy-ion collisions. We utilise ESE by constructing a reference flow vector, $q_{2}$ that allows to characterise an event based on it\u2019s ellipticity. Applying this technique on a data set, simulated from a 3+1D viscous hydrodynamic model EPOS3, we study the event-shape dependent modifications to some of the bulk properties like, inclusive transverse momentum ($p_{T}$) spectra and $p_{T}$-differential $v_{2}$ for p-Pb collisions at 5.02 TeV. 
Selecting events on the basis of different magnitudes of reference flow vector $q_{2}$, we observe a hint of event-shape induced modifications of $v_{2}$ as a function of $p_{T}$ but, the inclusive $p_{T}$-spectra of charged particles seem to be insensitive to this event-shape selection.'\nauthor:" +"---\nabstract: 'A novel concept of quantum random access memory (qRAM) employing a quantum walk is provided. Our qRAM relies on a bucket brigade scheme to access the memory cells. Introducing a bucket with chirality [*left*]{} and [*right*]{} as a quantum walker, and considering its quantum motion on a full binary tree, we can efficiently deliver the bucket to the designated memory cells, and fill the bucket with the desired information in the form of quantum superposition states. Our procedure has several advantages. First, we do not need to place any quantum devices at the nodes of the binary tree, and hence in our qRAM architecture, the cost to maintain the coherence can be significantly reduced. Second, our scheme is fully parallelized. Consequently, only $O(n)$ steps are required to access and retrieve $O(2^n)$ data in the form of quantum superposition states. Finally, the simplicity of our procedure may allow the design of qRAM with simpler structures.'\nauthor:\n- |\n Ryo Asaka[^1], Kazumitsu Sakai[^2] and Ryoko Yahagi [^3]\\\n \\\n *Department of Physics, Tokyo University of Science,*\\\n *Kagurazaka 1-3, Shinjuku-ku, Tokyo, 162-8601, Japan*\\\n \\\n \\\n \\\ndate: 'August 31, 2020'\ntitle: Quantum random access memory via quantum walk\n---\n\nIntroduction\n============\n\nThe" +"---\nauthor:\n- 'Fabio\u00a0Montagna, Stefan\u00a0Mach, Simone\u00a0Benatti, Angelo\u00a0Garofalo, Gianmarco\u00a0Ottavi, Luca\u00a0Benini,\u00a0 Davide\u00a0Rossi,\u00a0 and Giuseppe\u00a0Tagliavini,\u00a0 [^1]'\nbibliography:\n- 'CONTENTS/bibliography.bib'\ntitle: 'A Transprecision Floating-Point Cluster for Efficient Near-Sensor Data Analytics'\n---\n\n\\[sec:intro\\] pervasive adoption of edge computing is increasing the computational demand for algorithms targeted on embedded devices. Besides the aggressive optimization strategies adopted on the algorithmic side [@tagliavini2018transprecision], there is a great effort to find the best trade-off between architectural features and computational capabilities [@pullini2019mr]. Indeed, deploying artificial intelligence algorithms or digital signal processing (DSP) on near-sensor devices poses several challenges to resource-constrained low-power embedded systems. Fixed-point arithmetic is a well-established paradigm in embedded systems optimization since it allows a simplified numerical representation for real numbers at high energy efficiency [@barrois2017customizing]. Nevertheless, many applications require high precision results characterized by a wide dynamic range (e.g., the accumulation stage of support vectors, or feed-forward inference for deep neural networks). In these cases, fixed-point implementations may suffer from numerical instability, requiring an in-depth analysis to make the result reliable and additional code sections to normalize and adjust the dynamic range avoiding saturation (e.g., the fixed-point implementation of linear time-invariant digital filters described in [@volkova20]). As a result," +"---\nabstract: 'Supernova (SN) cosmology is based on the assumption that the corrected luminosity of SN Ia would not evolve with redshift. 
Recently, our age dating of stellar populations in early-type host galaxies (ETGs) from high-quality spectra has shown that this key assumption is most likely in error. It has been argued, though, that the age-Hubble residual (HR) correlation from ETGs is not confirmed by two independent age datasets measured from multi-band optical photometry of host galaxies of all morphological types. Here we show, however, that one of them is based on highly uncertain and inappropriate luminosity-weighted ages derived, in many cases, under serious template mismatch. The other dataset employs more reliable mass-weighted ages, but the statistical analysis involved is affected by regression dilution bias, severely underestimating both the slope and significance of the age-HR correlation. Remarkably, when we apply regression analysis with a standard posterior sampling method to this dataset comprising a large sample ($N=102$) of host galaxies, a very significant ($> 99.99 \\%$) correlation is obtained between the global population age and HR with the slope ($-0.047 \\pm 0.011$\u00a0mag/Gyr) highly consistent with our previous spectroscopic result from ETGs. When the local age of the environment around the site" +"---\nabstract: 'Based on hierarchical partitions, we provide the construction of Haar-type tight framelets on any compact set $K\\subseteq {\\mathbb{R}}^d$. In particular, on the unit block $[0,1]^d$, such tight framelets can be built with adaptivity and directionality. We show that the adaptive directional Haar tight framelet systems can be used for digraph signal representations. Some examples are provided to illustrate results in this paper.'\nauthor:\n- Yuchen Xiao\n- Xiaosheng Zhuang\ndate: 'Received: date / Accepted: date'\ntitle: Adaptive directional Haar tight framelets on bounded domains for digraph signal representations \n---\n\nIntroduction and motivation {#sec:intro}\n===========================\n\nHarmonic analysis including Fourier analysis, frame theory, wavelet/framelet analysis, etc., has been one of the most active research areas in mathematics over the past two centuries [@Book:Stein1993]. Typical harmonic analysis focuses on theory and applications related to functions defined on regular Euclidean domains [@Book:Chui; @CDV; @Book:Daubechies1992; @Book:Han; @HM:AA; @HanZhuang:alg.num; @Book:Shearlets; @Book:Mallat]. In recent years, driven by the rapid progress of deep learning and its successful applications in solving AI (artificial intelligence) related tasks, such as natural language processing, autonomous systems, robotics, medical diagnostics, and so on, there has been great interest in developing harmonic analysis for data defined on non-Euclidean domains" +"---\nabstract: 'Beklemishev introduced an ordinal notation system for the Feferman-Schütte ordinal $\\Gamma_0$ based on the *autonomous expansion* of provability algebras. In this paper we present the logic ${\\textbf{\\textup{BC}}}$ (for *Bracket Calculus*). The language of ${\\textbf{\\textup{BC}}}$ extends said ordinal notation system to a strictly positive modal language. Thus, unlike other provability logics, ${\\textbf{\\textup{BC}}}$ is based on a self-contained signature that gives rise to an ordinal notation system instead of modalities indexed by some ordinal given [*a priori.*]{} The presented logic is proven to be equivalent to ${\\bm{\\mathbf{RC}_{\\Gamma_0}}}$, that is, to the strictly positive fragment of ${\\bm{\\mathbf{GLP}_{\\Gamma_0}}}$.
We then define a combinatorial statement based on ${\\textbf{\\textup{BC}}}$ and show it to be independent of the theory ${\\mathbf{ATR}_0}$ of Arithmetical Transfinite Recursion, a theory of second order arithmetic far more powerful than Peano Arithmetic.'\nauthor:\n- |\n David Fern\u00e1ndez-Duque[^1]\\\n Eduardo Hermo Reyes[^2]\nbibliography:\n- 'References.bib'\ntitle: 'Deducibility and Independence in Beklemishev\u2019s Autonomous Provability Calculus'\n---\n\nIntroduction\n============\n\nIn view of G\u00f6del\u2019s second incompleteness theorem, we know that the consistency of any sufficiently powerful formal theory cannot be established using purely \u2018finitary\u2019 means. Since then, the field of proof theory, and more specifically of ordinal analysis, has been successful in measuring the non-finitary assumptions" +"---\nabstract: 'There have been many attempts to construct de Sitter space-times in string theory. While arguably there have been some successes, this has proven challenging, leading to the de Sitter swampland conjecture: quantum theories of gravity do not admit stable or metastable de Sitter space. Here we explain that, within controlled approximations, one lacks the tools to construct de Sitter space in string theory. Such approximations would require the existence of a set of (arbitrarily) small parameters, subject to severe constraints. But beyond this one also needs an understanding of big-bang and big-crunch singularities that is not currently accessible to standard approximations in string theory. The existence or non-existence of metastable de Sitter space in string theory remains a matter of conjecture.'\nbibliography:\n- 'desitter\\_space\\_in\\_string\\_theory.bib'\n---\n\n[SCIPP 20/08 ]{}\n\n1.2cm\n\n[**Obstacles to Constructing de Sitter Space in String Theory**]{}\n\n1.4cm\n\n[Michael Dine[*$^a$*]{}, Jamie A.P. Law-Smith[*$^b$*]{}, Shijun Sun[*$^a$*]{}, Duncan Wood[*$^a$*]{}, and Yan Yu[*$^a$*]{}]{}\\\n0.4cm [*$^{(a)}$Santa Cruz Institute for Particle Physics and\\\nDepartment of Physics, University of California at Santa Cruz\\\n1156 High St, Santa Cruz, CA 95064, USA*]{}\\\n\u00a0\\\n[*$^{(b)}$Department of Astronomy and Astrophysics, University of California at Santa Cruz\\\n1156 High St, Santa Cruz, CA 95064, USA*]{}\\\n\nIntroduction: The" +"---\nabstract: 'We examine how the mass assembly of central galaxies depends on their location in the cosmic web. The [Horizon-AGN]{} simulation is analysed at $z\\sim2$ using the [DisPerSE]{} code to extract multi-scale cosmic filaments. We find that the dependency of galaxy properties on large-scale environment is mostly inherited from the (large-scale) environmental dependency of their host halo mass. When adopting a residual analysis that removes the host halo mass effect, we detect a direct and non-negligible influence of cosmic filaments. Proximity to filaments enhances the build-up of stellar mass, a result in agreement with previous studies. However, our multi-scale analysis also reveals that, at the edge of filaments, star formation is suppressed. In addition, we find clues for compaction of the stellar distribution at close proximity to filaments. We suggest that gas transfer from the outside to the inside of the haloes (where galaxies reside) becomes less efficient closer to filaments, due to high angular momentum supply at the vorticity-rich edge of filaments. 
This quenching mechanism may partly explain the larger fraction of passive galaxies in filaments, as inferred from observations at lower redshifts.'\nauthor:\n- |\n \\\n $^{1}$ Department of Astronomy, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722," +"---\nabstract: 'In standard process algebra, parallel components do not share a common state and communicate through synchronisation. The advantage of this type of communication is that it facilitates compositional reasoning. For modelling and analysing systems in which parallel components operate on shared memory, however, the communication-through-synchronisation paradigm is sometimes less convenient. In this paper we study a process algebra with a notion of global variable. We also propose an extension of Hennessy-Milner logic with predicates to test and set the values of the global variables, and prove correspondence results between validity of formulas in the extended logic and stateless bisimilarity and between validity of formulas in the extended logic without the set operator and state-based bisimilarity. We shall also present a translation from the process algebra with global variables to a fragment of mCRL2 that preserves the validity of formulas in the extended Hennessy-Milner logic.'\nauthor:\n- 'Mark Bouwman Bas Luttik Wouter Schols Tim A.C. Willemse'\nbibliography:\n- 'bibliography.bib'\ntitle: A process algebra with global variables\n---\n\nIntroduction\n============\n\nCommunication between parallel components in real-world systems takes many forms: packets over a network, inter-process communication, communication via shared memory, communication over a bus, etcetera. Process algebras usually offer" +"---\nabstract: 'This article presents an approach to encode Linear Temporal Logic (LTL) specifications into a Mixed Integer Quadratically Constrained Quadratic Program (MIQCQP) footstep planner. We propose that the integration of LTL specifications into the planner not only facilitates safe and desirable locomotion between obstacle-free regions, but also provides a rich language for high-level reasoning in contact planning. Simulations of the footstep planner in a 2D environment satisfying encoded LTL specifications demonstrate the results of this research.'\nauthor:\n- Vikram Ramanathan\nbibliography:\n- 'IEEEabrv.bib'\n- 'IEEEexample.bib'\ntitle: |\n ***Footstep Planning with Encoded Linear Temporal Logic Specifications\\\n *** \n---\n\nIntroduction\n============\n\nA humanoid accomplishes locomotion by changing its footholds. In order to move feasibly and efficiently, a footstep planner must be able to find sequences of foot positions and orientations that will realize the robot’s locomotion objectives. One established solution for the footstep planning problem is to perform some discrete sampling-based search algorithm over reachable foot positions and orientations. This approach first determines candidate footsteps through one of several methods, including:\n\n- intersecting valid sample limb configurations with the environment to determine which limb configurations are close to contact and then projecting these configurations onto contact using inverse kinematics [@tonneau2018efficient]\n\n-" +"---\nauthor:\n- 'N. Meunier, A.-M.
Lagrange'\nbibliography:\n- 'biblio.bib'\ndate: 'Received ; Accepted'\ntitle: 'The effects of granulation and supergranulation on Earth-mass planet detectability in the habitable zone around F6-K4 stars.'\n---\n\nIntroduction\n============\n\nA large number of exoplanets have been detected using indirect techniques for over 20 years. However, because these techniques are indirect, they are very sensitive to stellar variability. The radial velocity (RV) technique is particularly sensitive to activity that is due to both magnetic and dynamical processes at different temporal scales. Many studies have focussed on stellar magnetic activity [recognised early on by @saar97] based on simulations of simple spot configurations [e.g. @desort07; @boisse12; @dumusque12] as well as more complex patterns [e.g. @lagrange10b; @meunier10; @meunier10a; @borgniet15; @santos15; @dumusque16; @herrero16; @dumusque17; @meunier19; @meunier19b; @meunier19c]. Flows on different spatial and temporal scales also play an important role: in addition to large-scale flows such as meridional circulation [@makarov10; @meunier20c], oscillations, granulation, and supergranulation also affect RV time series.\n\nThe properties of these small-scale flows and the mitigating techniques used to remove them (mostly averaging techniques) have been studied in several works [e.g. @dumusque11b; @cegla13; @meunier15; @cegla15; @sulis16; @sulis17; @cegla18; @meunier19e; @cegla19; @chaplin19] for the Sun and other" +"---\nauthor:\n- 'R.\u00a0Staubert, L.\u00a0Ducci, L.\u00a0Ji, F.\u00a0Fürst, J.\u00a0Wilms, R.E.\u00a0Rothschild, K.\u00a0Pottschmidt, M.\u00a0Brumback, F.\u00a0Harrison'\ndate: 'submitted: 06/07/2020, accepted: 17/08/2020 '\ntitle: 'The cyclotron line energy in Her X-1: stable after the decay'\n---\n\nIntroduction\n============\n\nThe eclipsing binary Her\u00a0X-1/HZ\u00a0Her is a low-mass X-ray binary (LMXB), discovered as an X-ray source by the first X-ray satellite *UHURU* in 1971 [@Tananbaum_etal72]. Similar to Cen\u00a0X-3, the source was identified as an X-ray pulsar, powered by mass accretion from its companion. Her\u00a0X-1 is one of the most interesting X-ray pulsars due to its wide variety of observable features. Of the many introductions to this source we refer to some of the most recent ones, e.g., @Staubert_etal17 [@Staubert_etal19; @Sazonov_etal20]. In order to maintain some degree of completeness within this contribution we list the following main features of Her\u00a0X-1: the spin period of the neutron star is 1.24s, the orbital period is 1.7d (identified by eclipses and the modulation of the pulse arrival times), there is a super-orbital flux modulation with a somewhat variable period of $\\sim$35d. This *On-Off* variation can be understood as being due to the precession of a warped" +"---\nabstract: 'Numerical methods for the optimal transport problem are an active area of research. Recent work of Kitagawa and Abedin shows that the solution of a time-dependent equation converges exponentially fast as time goes to infinity to the solution of the optimal transport problem. This suggests a fast numerical algorithm for computing optimal maps; we investigate such an algorithm here in the 1-dimensional case. Specifically, we use a finite-difference scheme to solve the time-dependent optimal transport problem and carry out an error analysis of the scheme.
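For orientation, the stationary object that such a time-dependent scheme converges to has a closed form in one dimension: for the quadratic cost the optimal map is $T = G^{-1}\circ F$, where $F$ and $G$ are the distribution functions of the source and target densities. The sketch below evaluates this map on a grid; it is a reference solution useful for checking a solver, not the finite-difference scheme of the paper.

```python
import numpy as np

# 1-D optimal transport for quadratic cost: T = G^{-1} o F, with F, G the
# CDFs of the source and target densities (grid-based illustration only).
x = np.linspace(-4, 4, 2001)
f = np.exp(-x**2 / 2)                 # unnormalized source density
g = np.exp(-(x - 1)**2)               # unnormalized target density

F = np.cumsum(f); F /= F[-1]          # source CDF on the grid
G = np.cumsum(g); G /= G[-1]          # target CDF on the grid

T = np.interp(F, G, x)                # T(x_i) = G^{-1}(F(x_i))
```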
A collection of numerical examples is also presented and discussed.'\nauthor:\n- |\n Abby Brauer\\\n Lewis and Clark College\\\n abrauer@lclark.edu\\\n \\\n Megan Krawick\\\n Youngstown State University\\\n mekrawick@student.ysu.edu\\\n \\\n Manuel Santana\\\n Utah State University\\\n manuelarturosantana@gmail.com\\\n \\\n Advisors:\\\n Farhan Abedin\\\n abedinf1@msu.edu\\\n Michigan State University\\\n \\\n Jun Kitagawa\\\n kitagawa@math.msu.edu\\\n Michigan State University\nbibliography:\n- 'MAarxiv.bib'\ntitle: '**Numerical Analysis of the 1-D Parabolic Optimal Transport Problem**'\n---\n\nIntroduction\n============\n\nThe Optimal Transport Problem\n-----------------------------\n\nThe centuries-old optimal transport problem asks how to find the cheapest way to transport materials from a given source to a target location [@Monge]. In the 1-dimensional case and for the quadratic cost function, the mathematical formulation of the problem is as" +"---\nabstract: 'The concept of intelligent system has emerged in information technology as a type of system derived from successful applications of artificial intelligence. The goal of this paper is to give a general description of an intelligent system, which integrates previous approaches and takes into account recent advances in artificial intelligence. The paper describes an intelligent system in a generic way, identifying its main properties and functional components. The presented description follows a pragmatic approach to be used in an engineering context as a general framework to analyze and build intelligent systems. Its generality and its use are illustrated with real-world system examples and related to artificial intelligence methods.'\nauthor:\n- Martin Molina\nbibliography:\n- 'main.bib'\ndate: December 2022\ntitle: 'What is an intelligent system?'\n---\n\nIntroduction\n============\n\nMankind has made significant progress through the development of increasingly powerful and sophisticated tools. In the age of the industrial revolution, a large number of tools were built as machines that automated tasks requiring physical effort. In the digital age, computer-based tools are being created to automate tasks that require mental effort. The capabilities of these tools have been progressively increased to perform tasks that require more and more intelligence. This" +"---\nabstract: 'We present a study of radiation propagation through a disordered amplifying honeycomb photonic lattice, where elastic scattering provides feedback for light generation. To explore the interplay of different scattering mechanisms and the amplification background, we consider the Dirac Hamiltonian with a random potential and derive a diffusion equation for the average intensity of light. The transmission coefficient and interference correction to the diffusion coefficient are enhanced near the lasing threshold. The transition between weak anti-localization and weak localization behaviours might be controlled by the parameters associated with the amplification and inter-valley scattering rates.'\nauthor:\n- SK Firoz Islam\n- 'Alexander A.
Zyuzin'\nbibliography:\n- 'disorder\\_PhL\\_v2.bib'\ntitle: Theory of light diffusion through amplifying photonic lattice\n---\n\nIntroduction\n============\n\nDisordered amplifying optical media have received much attention, most particularly for their diverse applications as random lasers [@letokhov1968generation; @turitsyn2010random; @wiersma2013; @wiersma2008physics; @cao2003lasing], spatial light confinement or coherent control [@PhysRevLett.84.5584; @riboli; @PhysRevLett.112.133903; @PhysRevApplied.12.064045], and application to medical technology [@yun2017light; @polson2004random]. The wave interference processes leading to weak localization effects [@doi:10.1063] have been studied extensively with and without amplification backgrounds [@wiersma1997; @PhysRevB.55.5736; @Stephen; @PhysRevE.51.5274; @PhysRevB.56.12038; @Zyuzin_1994; @PhysRevB.89.224202; @PhysRevE.54.4256; @PhysRevB.52.7960; @PhysRevB.50.9644] in three-dimensional optical media, followed by a number of experiments [@deOliveira:97; @deOliveira:96; @PhysRevA.64.063808; @PhysRevLett.93.263901;" +"---\nabstract: 'In this paper, based on the new version of the gedanken experiments proposed by Sorce and Wald, we examine the weak cosmic censorship in the perturbation process of accreting matter fields for charged dilaton-Lifshitz black holes. In the investigation, we assume that the black hole is perturbed by some extra matter source satisfying the null energy condition and ultimately settles down to a static charged dilaton-Lifshitz black hole in the asymptotic future. Then, after applying the Noether charge method, we derive the first-order and second-order perturbation inequalities of the perturbation matter fields. As a result, we find that the nearly extremal charged dilaton-Lifshitz black hole cannot be destroyed under the second-order approximation of perturbation. This result implies that the weak cosmic censorship conjecture might be a general feature of Einstein gravity, and it is independent of the asymptotic behaviors of the black holes.'\nauthor:\n- Jie Jiang\n- Ming Zhang\ntitle: 'New version of the gedanken experiments to test the weak cosmic censorship in charged dilaton-Lifshitz black holes'\n---\n\nIntroduction\n============\n\nGeneral relativity predicts the existence of black holes. Most black holes harbor a central singularity. However, the singularity will" +"---\nabstract: 'We prove that the notion of a voltage graph lift comes from an adjunction between the category of voltage graphs and the category of group labeled graphs.'\naddress: |\n Department of Mathematics and Descriptive Geometry\\\n Faculty of Civil Engineering, Slovak University of Technology, Slovak Republic \nauthor:\n- Gejza Jenča\ntitle: Voltage lifts of graphs from a category theory viewpoint\n---\n\n[^1]\n\nIntroduction\n============\n\nIn this paper, a [*graph*]{} means a structure sometimes called a [*symmetric multidigraph*]{} – that means that it may have multiple darts with the same source and target, and the set of all darts of the graph is equipped with an involutive mapping $λ$ that maps every dart to a dart with source and target swapped.\n\nA [*voltage graph*]{} is a graph in which every dart is labeled with an element of a group in a way that respects the involutive symmetry $λ$, so that the label of a dart $d$ is inverse to the label of $λ(d)$.
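A minimal computational rendering of these definitions may help fix ideas. The dictionary-based encoding below, with voltages in the cyclic group $\mathbb{Z}_n$, is our own illustrative convention; it also anticipates the derived (lifted) graph construction recalled next.

```python
# Voltage graph over Z_n: darts with an involution lam and a voltage map
# alpha satisfying alpha(lam(d)) = -alpha(d) (mod n). The derived graph
# has vertex set V x Z_n; dart (d, a) runs from (u, a) to (v, a + alpha(d)).
n = 3                                            # voltages in Z_3
darts = {"d": ("u", "v"), "d_rev": ("v", "u")}   # dart -> (source, target)
lam = {"d": "d_rev", "d_rev": "d"}               # involution on darts
alpha = {"d": 1, "d_rev": (-1) % n}              # voltages

# compatibility of voltages with the involution
assert all((alpha[d] + alpha[lam[d]]) % n == 0 for d in darts)

# derived (lifted) graph: (dart, group element) -> lifted (source, target)
lifted = {(d, a): ((darts[d][0], a), (darts[d][1], (a + alpha[d]) % n))
          for d in darts for a in range(n)}
```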
Similarly, a [*group labeled graph*]{} has all vertices labeled with elements of a group.\n\nIn [@gross1974voltage] Gross introduced the construction of a [*derived graph of a voltage graph*]{}. Nowadays, derived voltage graphs are called [*(ordinary) voltage graph" +"---\nabstract: 'We analyze a recent treatment of the interaction of a magnetic quadrupole moment with a radial electric field for a non-relativistic particle in a rotating frame and show that the derivation of the equations in the paper is anything but rigorous. The authors presented eigenvalues and eigenfunctions for two sets of quantum numbers as if they belonged to the same physical problem when they are solutions for two different models. In addition, the authors failed to comment on the possibility of multiple solutions for every set of quantum numbers.'\nauthor:\n- |\n Francisco M. Fernández[^1]\\\n INIFTA, DQT, Sucursal 4, C.C 16,\\\n 1900 La Plata, Argentina\ntitle: 'Comment on: “Interaction of the magnetic quadrupole moment of a non-relativistic particle with an electric field in a rotating frame. Ann. Phys. 412 (2020) 168040”'\n---\n\nIn a recent paper[@HMM20] the authors studied the interaction of a magnetic quadrupole moment with a radial electric field for a non-relativistic particle in a rotating frame. They solved the Schrödinger equation for a model potential by means of a power-series method and obtained the lowest eigenvalues and eigenfunctions. In this Comment we analyze the derivation of the main equations and discuss their solutions." +"---\nabstract: 'Online media platforms have enabled users to connect with individuals and organizations, and to share their thoughts. Other than connectivity, these platforms also serve multiple purposes - education, promotion, updates, awareness, etc. Increasing the reputation of individuals in online media (*aka Social growth*) is thus essential these days, particularly for business owners and event managers who are looking to improve their publicity and sales. The natural way of gaining social growth is a tedious task, which leads to the creation of unfair ways to boost the reputation of individuals artificially. Several online blackmarket services have developed a thriving ecosystem with lucrative offers to attract content promoters for publicizing their content online. These services are operated in such a way that most of their inorganic activities go unnoticed by the media authorities, and the customers of the blackmarket services are less likely to be spotted. We refer to such unfair ways of bolstering social reputation in online media as *collusion*. This survey is the first attempt to provide readers with a comprehensive outline of the latest studies dealing with the identification and analysis of blackmarket-driven collusion in online media. We present a broad overview of the problem, definitions of the related problems" +"---\nabstract: 'A novel method is proposed to ensure stability and constraint satisfaction, i.e. “compatibility”, for nonlinear affine systems. We require an asymptotically stabilizing control law and a zeroing control barrier function (ZCBF), and define a region of attraction for which the proposed control safely stabilizes the system. Our methodology requires checking conditions of the system dynamics over the state space, which may be computationally expensive.
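As a toy rendering of such a check, one can verify the ZCBF inequality by brute force on a grid over the safe set; the scalar dynamics, control gain, and barrier below are our own placeholder choices, not those of the paper.

```python
import numpy as np

# Scalar system xdot = u with stabilizing law u = -k*x; safe set
# h(x) = 1 - x^2 >= 0; ZCBF condition with a linear class-K function:
# hdot(x) + gamma * h(x) >= 0 along the closed loop.
k, gamma = 2.0, 1.0
xs = np.linspace(-1.0, 1.0, 10001)     # grid over the safe set

h = 1 - xs**2
hdot = (-2 * xs) * (-k * xs)           # dh/dx * xdot
print(np.all(hdot + gamma * h >= 0))   # True: law and ZCBF are compatible
```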
To facilitate the search for compatibility, we extend the results to a class of nonlinear systems including mechanical systems for which a novel controller is designed to guarantee passivity, safety, and stability. The proposed technique is demonstrated using numerical examples.'\nauthor:\n- 'Wenceslao Shaw Cortez, and Dimos V. Dimarogonas, [^1]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'ShawCortez\\_CBFRH.bib'\ntitle: |\n **On Compatibility and Region of Attraction for Safe,\\\n Stabilizing Control Laws** \n---\n\nIntroduction\n============\n\nZeroing control barrier functions (ZCBFs) have gained attention for constraint satisfaction of nonlinear systems [@Ames2019]. ZCBFs are robust to perturbations [@Ames2019; @Xu2015a], can respect input constraints [@ShawCortez2020; @Squires2018], and are less restrictive than Lyapunov-based methods, as the derivative of a ZCBF need not be positive semi-definite [@Ames2019].\n\nOther constraint-satisfying methods include Nonlinear Model Predictive Control (NMPC) and reference governors (RG)." +"---\nabstract: 'Transverse spherocity is an event shape observable having a unique capability to separate events based on their geometrical shapes. Recent results from experiments at the LHC suggest that transverse spherocity is an important event classifier in small collision systems. In this work, we use transverse spherocity for the first time in heavy-ion collisions and perform an extensive study of the azimuthal anisotropy of charged particles produced in Pb-Pb collisions at $\\sqrt{s_{\\rm{NN}}} = 5.02$\u00a0TeV using A Multi-Phase Transport Model (AMPT). The azimuthal anisotropy is estimated using the 2-particle correlation method, which suppresses the non-flow effects significantly with an appropriate pseudorapidity gap of particle pairs. The results from AMPT are compared with estimations from the PYTHIA8 (Angantyr) model and it is found that with the chosen pseudorapidity gap, the residual non-flow effects become negligible. We find that high-spherocity events have nearly zero elliptic flow, while low-spherocity events contribute significantly to the elliptic flow of spherocity-integrated events.'\nauthor:\n- Neelkamal Mallick\n- 'Raghunath Sahoo[^1]'\n- Sushanta Tripathy\n- Antonio Ortiz\ntitle: 'Study of Transverse Spherocity and Azimuthal Anisotropy in Pb-Pb collisions at $\\sqrt{s_{\\rm{NN}}} = 5.02$\u00a0TeV using A Multi-Phase Transport Model'\n---\n\nIntroduction {#intro}\n============\n\nQuark Gluon Plasma (QGP)," +"---\nauthor:\n- 'D. R. Bett'\n- 'P. N. Burrows'\n- 'C. Perry'\n- 'R. Ramjiawan'\n- 'N. Terunuma'\n- 'K. Kubo'\n- 'T. Okugi'\ntitle: 'A sub-micron resolution, bunch-by-bunch beam trajectory feedback system and its application to reducing wakefield effects in single-pass beamlines'\n---\n\n\\[s:Intro\\]Introduction\n=======================\n\nThe Accelerator Test Facility (ATF) is a research facility located at the High Energy Accelerator Research Organization (KEK) in Tsukuba, Japan. The ATF is intended to facilitate the development of technologies and techniques required for the realization of a future linear electron-positron collider, either the International Linear Collider (ILC)\u00a0[@JAI-2013-001] or the Compact Linear Collider (CLIC)\u00a0[@CERN-2018-010-M]. The ATF is shown schematically in Figure\u00a0\\[f:SchematicATF\\]; it consists of an RF gun, a 1.3\u00a0GeV electron linac, a damping ring, and a beamline known as ATF2\u00a0[@SLAC-R-771; @SLAC-R-796].
At the end of the ATF2 beamline, a pair of powerful quadrupole magnets is used to focus the electron beam to the smallest size possible at a location known as the interaction point (IP). The ATF2 beamline is shown in more detail in Figure\u00a0\\[f:SchematicATF2\\].\n\nThe ATF2 Collaboration has two goals. Goal 1 is the production of a 37\u00a0nm vertical beam spot size at the" +"---\nauthor:\n- 'T.-Q. Cang'\n- 'P. Petit'\n- 'J.-F. Donati'\n- 'C.P. Folsom'\n- 'M. Jardine'\n- 'C. Villarreal D’Angelo'\n- |\n \\\n A.A. Vidotto\n- 'S.C. Marsden'\n- 'F. Gallet'\n- 'B. Zaire'\nbibliography:\n- 'ap149\\_draft.bib'\ntitle: 'Magnetic field and prominences of the young, solar-like, ultra-rapid rotator V530 Per'\n---\n\n[Young solar analogs reaching the main sequence experience very strong magnetic activity, generating angular momentum losses through wind and mass ejections.]{} [We investigate signatures of magnetic fields and activity at the surface and in the prominence system of the ultra-rapid rotator V530 Per, a G-type solar-like member of the young open cluster $\\alpha$\u00a0Persei. This object has a rotation period that is shorter than that of all stars with available magnetic maps.]{} [With a time-series of spectropolarimetric observations gathered with ESPaDOnS over two nights on the Canada-France-Hawaii Telescope (CFHT), we reconstructed the surface brightness and large-scale magnetic field of V530 Per using the Zeeman-Doppler imaging method, assuming an oblate stellar surface. We also estimated the short-term evolution of the brightness distribution through latitudinal differential rotation. Using the same data set, we finally mapped the spatial distribution of prominences through tomography of the [H$\\alpha$]{}\u00a0emission.]{} [The brightness map is dominated" +"---\nabstract: 'Gaussian processes (GPs) serve as flexible surrogates for complex surfaces, but buckle under the cubic cost of matrix decompositions with big training data sizes. Geospatial and machine learning communities suggest pseudo-inputs, or inducing points, as one strategy to obtain an approximation easing that computational burden. However, we show how the placement and number of inducing points can be thwarted by pathologies, especially in large-scale dynamic response surface modeling tasks. As a remedy, we suggest porting the inducing point idea, which is usually applied globally, over to a more local context where selection is both easier and faster. In this way, our proposed methodology hybridizes global inducing point and data subset-based local GP approximation. A cascade of strategies for planning the selection of local inducing points is provided, and comparisons are drawn to related methodology with emphasis on computer surrogate modeling applications. We show that local inducing points extend their global and data-subset component parts on the accuracy–computational efficiency frontier.
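To fix ideas, the global inducing-point idea in its simplest subset-of-regressors form replaces the $N \times N$ decomposition by one of the size of the inducing set; the sketch below uses a Gaussian kernel and toy data of our own choosing, and deliberately shows the global baseline rather than the local scheme proposed here.

```python
import numpy as np

def sor_predict(X, y, Xu, Xs, noise=1e-2, ls=0.1):
    """Subset-of-regressors predictive mean with inducing points Xu:
    mu(x*) = k(x*,U) Sigma k(U,X) y / noise,
    Sigma = (k(U,U) + k(U,X) k(X,U) / noise)^{-1}."""
    k = lambda A, B: np.exp(-(A[:, None] - B[None, :])**2 / (2 * ls**2))
    Kuu, Kuf, Ksu = k(Xu, Xu), k(Xu, X), k(Xs, Xu)
    Sigma = np.linalg.inv(Kuu + Kuf @ Kuf.T / noise + 1e-10 * np.eye(len(Xu)))
    return Ksu @ (Sigma @ (Kuf @ y)) / noise

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 500)                        # 500 training runs
y = np.sin(8 * X) + 0.1 * rng.standard_normal(500)
Xu = np.linspace(0, 1, 20)                        # 20 inducing points
mu = sor_predict(X, y, Xu, np.linspace(0, 1, 101))
```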
Illustrative examples are provided on benchmark data and a large-scale real-simulation satellite drag interpolation problem.'\nauthor:\n- 'D.\u00a0Austin Cole[^1]'\n- 'Ryan Christianson[^2]'\n- 'Robert B.\u00a0Gramacy'\nbibliography:\n- 'liGP\\_articles.bib'\ntitle: |\n Locally induced Gaussian processes for\\\n large-scale simulation experiments" +"---\nauthor:\n- 'A.\u00a0Bocci,'\n- 'M.\u00a0Kortelainen,'\n- 'V.\u00a0Innocente,'\n- 'F.\u00a0Pantaleo,'\n- 'M.\u00a0Rovere'\nbibliography:\n- 'ms.bib'\ntitle: Heterogeneous reconstruction of tracks and primary vertices with the CMS pixel tracker\n---\n\nIntroduction\n============\n\nThe High-Luminosity upgrade of the LHC\u00a0[@apollinari2017high] will pose unprecedented challenges to the reconstruction software used by the experiments due to the increase both in instantaneous luminosity and readout rate. In particular, the CMS experiment at CERN\u00a0[@collaboration2008cms] has been designed with a two-level trigger system: the Level 1 Trigger, implemented on custom-designed electronics, and the *High Level Trigger* (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms running on the available computing resources, the sustainable output rate, and the selection efficiency.\n\nWhen the HL-LHC becomes operational, it will reach a luminosity of with an average pileup of proton-proton collisions. To fully exploit the higher luminosity, the CMS experiment will increase the full readout rate from to \u00a0[@l1triggerTDR:2714892]. The higher luminosity, pileup and input rate present an exceptional challenge to the HLT, which will require a processing power larger than today by more than" +"---\nabstract: 'We report a numerical study of the equation of state of crystalline body-centered-cubic (BCC) hydrogen, tackled with a variety of complementary many-body wave function methods. These include continuum stochastic techniques of fixed-node diffusion and variational quantum Monte Carlo, and the Hilbert space stochastic method of full configuration-interaction quantum Monte Carlo. In addition, periodic coupled-cluster methods were also employed. Each of these methods comes with different strengths and approximations, but their combination in order to perform reliable extrapolation to complete basis set and supercell size limits gives confidence in the final results. The methods were found to be in good agreement for equilibrium cell volumes for the system in the BCC phase, with a lattice parameter of 3.307 Bohr.'\nauthor:\n- Sam Azadi\n- 'George H. Booth'\n- 'Thomas D. Kühne'\ntitle: 'Equation of state of atomic solid hydrogen by stochastic many-body wave function methods'\n---\n\nIntroduction\n============\n\nA stochastic description of quantum mechanics has significant advantages in the understanding of quantum systems, especially when a large number of degrees of freedom are involved. The main advantage of this approach lies in the exploitation of well-established mathematical bounds derived from probability theory and stochastic processes to control the" +"---\nabstract: 'The bipartite quantum and thermal entanglement is quantified within pure and mixed states of a mixed spin-(1/2,1) Heisenberg dimer with the help of negativity.
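Concretely, for a thermal (Gibbs) state of such a dimer the negativity follows directly from the partial transpose via $\mathcal{N} = (\lVert \rho^{T_A}\rVert_1 - 1)/2$; the sketch below uses arbitrary illustrative Hamiltonian parameters, not values fitted to any material.

```python
import numpy as np

# Spin-1/2 (s*) and spin-1 (S*) operators, hbar = 1.
sx = 0.5 * np.array([[0, 1], [1, 0]], complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.diag([1.0, -1.0]).astype(complex)
r = 1 / np.sqrt(2)
Sx = r * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], complex)
Sy = r * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

J, D, B, T = 1.0, 0.5, 0.2, 0.1        # illustrative parameters
kron = np.kron
H = (J * (kron(sx, Sx) + kron(sy, Sy) + kron(sz, Sz))
     + D * kron(np.eye(2), Sz @ Sz)
     - B * (kron(sz, np.eye(3)) + kron(np.eye(2), Sz)))

w, V = np.linalg.eigh(H)               # thermal state exp(-H/T)/Z
p = np.exp(-(w - w.min()) / T); p /= p.sum()
rho = (V * p) @ V.conj().T

# Partial transpose on the spin-1/2 factor, then N = (||.||_1 - 1)/2.
rho_pt = rho.reshape(2, 3, 2, 3).transpose(2, 1, 0, 3).reshape(6, 6)
negativity = (np.abs(np.linalg.eigvalsh(rho_pt)).sum() - 1) / 2
```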
It is shown that the negativity, which may serve as a measure of the bipartite entanglement at zero as well as nonzero temperatures, strongly depends on intrinsic parameters such as the exchange and uniaxial single-ion anisotropy in addition to extrinsic parameters such as temperature and magnetic field. It turns out that a rising magnetic field unexpectedly reinforces the bipartite entanglement due to the Zeeman splitting of energy levels, which lifts a two-fold degeneracy of the quantum ferrimagnetic ground state. The maximal bipartite entanglement is thus reached within a quantum ferrimagnetic phase at sufficiently low but nonzero magnetic fields on the assumption that the gyromagnetic g-factors of the spin-1/2 and spin-1 magnetic ions are equal and the uniaxial single-ion anisotropy is half of the exchange constant. It is suggested that the heterodinuclear complex \\[Ni(dpt)(H$_2$O)Cu(pba)\\]$\\cdot$2H$_2$O (pba=1,3-propylenebis(oxamato) and dpt=bis-(3-aminopropyl)amine), which affords an experimental realization of the mixed spin-(1/2,1) Heisenberg dimer, remains strongly entangled up to relatively high temperatures (about 140\u00a0K) and magnetic fields (about 140\u00a0T), which are comparable with the relevant exchange constant.'\nauthor:\n- 'Hana" +"---\nabstract: 'Nested simulation arises frequently in [risk management]{} or uncertainty quantification problems, where the performance measure is a function of the simulation output mean conditional on the outer scenario. The standard nested simulation samples $M$ outer scenarios and runs $N$ inner replications at each. We propose a new experiment design framework for a problem whose inner replication’s inputs are generated from distributions parameterized by the outer scenario. This structure lets us pool replications from an outer scenario to estimate another scenario’s conditional mean via the likelihood ratio method. We formulate a bi-level optimization problem to decide not only which of $M$ outer scenarios to simulate and how many times to replicate at each, but also how to pool these replications such that the total simulation effort is minimized while achieving a target level of [precision]{}. The resulting optimal design requires far less simulation effort than $MN$. We provide asymptotic analyses on the convergence rates of the performance measure estimators computed from the experiment design. Empirical results show that our experiment design reduces the simulation effort by orders of magnitude compared to the standard nested simulation and outperforms a state-of-the-art regression-based design that pools replications via regression.'\nauthor:\n- Mingbin" +"---\nabstract: 'In order to certify performance and safety, feedback control requires precise characterization of sensor errors. In this paper, we provide guarantees on such feedback systems when sensors are characterized by solving a supervised learning problem. We show a uniform error bound on nonparametric kernel regression under a dynamically-achievable dense sampling scheme. This allows for a finite-time convergence rate on the sub-optimality of using the regressor in closed-loop for waypoint tracking.
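A generic member of the estimator class analyzed here is the Nadaraya-Watson kernel regressor; the sketch below isolates the sensor-characterization step, with the kernel, bandwidth, and data all being illustrative assumptions.

```python
import numpy as np

def nw_regress(x_query, X, Y, h=0.1):
    """Nadaraya-Watson estimate m(x) = sum_i w_i y_i / sum_i w_i,
    with Gaussian weights w_i = exp(-((x - x_i)/h)^2 / 2)."""
    w = np.exp(-0.5 * ((x_query - X) / h) ** 2)
    return (w @ Y) / w.sum()

# Learn a "perception map" state -> measurement from dense noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 400)
Y = np.tanh(3 * X) + 0.05 * rng.standard_normal(400)
print(nw_regress(0.25, X, Y))          # close to tanh(0.75)
```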
We demonstrate our results in simulation with simplified unmanned aerial vehicle and autonomous driving examples.'\nauthor:\n- |\n Sarah Dean and Benjamin Recht\\\n Department of EECS, University of California, Berkeley\nbibliography:\n- 'refs.bib'\ntitle: 'Certainty Equivalent Perception-Based Control'\n---\n\nIntroduction\n============\n\nMachine learning provides a promising avenue for incorporating rich sensing modalities into autonomous systems. However, our coarse understanding of how ML systems fail limits the adoption of data-driven techniques in real-world applications. In particular, applications involving feedback require that errors do not accumulate and lead to instability. In this work, we propose and analyze a baseline method for incorporating a learning-enabled component into closed-loop control, providing bounds on the sample complexity of a reference tracking problem.\n\nMuch recent work on developing guarantees for learning and control has" +"---\nabstract: 'We present an overview of the GOTHAM (GBT Observations of TMC-1: Hunting Aromatic Molecules) Large Program on the Green Bank Telescope. This and a related program were launched to explore the depth and breadth of aromatic chemistry in the interstellar medium at the earliest stages of star formation, following our earlier detection of benzonitrile ($c$-) in TMC-1. In this work, details of the observations, use of archival data, and data reduction strategies are provided. Using these observations, the interstellar detection of propargyl cyanide () is described, as well as the accompanying laboratory spectroscopy. We discuss these results, and the survey project as a whole, in the context of investigating a previously unexplored reservoir of complex, gas-phase molecules in pre-stellar sources. A series of companion papers describe other new astronomical detections and analyses.'\nauthor:\n- 'Brett A. McGuire'\n- 'Andrew M. Burkhardt'\n- 'Ryan A. Loomis'\n- 'Christopher N. Shingledecker'\n- Kin Long Kelvin Lee\n- 'Steven B. Charnley'\n- 'Martin A. Cordiner'\n- Eric Herbst\n- Sergei Kalenskii\n- Emmanuel Momjian\n- 'Eric R. Willis'\n- Ci Xue\n- 'Anthony J. Remijan'\n- 'Michael C. McCarthy'\ntitle: 'Early Science from GOTHAM: Project Overview, Methods, and the Detection of" +"---\nabstract: 'The FIFA Men’s World Cup Tournament (WCT) is the most important football (soccer) competition, attracting worldwide attention. A popular practice among football fans in Brazil is to organize contests in which each participant submits guesses for the final score of each match. The participants are then ranked according to some scoring rule. Inspired by these contests, we created a website to hold an online contest, in which participants were asked for their probabilities on the outcomes of upcoming matches of the WCT. After each round of the tournament, the ranking of all users based on a proper scoring rule was published. This paper studies the performance of some methods intended to extract the [*wisdom of the crowds*]{}, which are aggregated forecasts that use some or all of the available forecasts. The latter methods are compared to simpler forecasting strategies as well as to statistical prediction models. Our findings corroborate the hypothesis that, at least for sporting events, the [*wisdom of the crowds*]{} offers a competitive forecasting strategy.
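One of the simplest crowd aggregators of this kind is the equal-weight linear opinion pool, scored with a quadratic (Brier-type) rule; the sketch below is generic and does not reproduce the exact aggregators or the scoring rule of the contest.

```python
import numpy as np

# Probabilistic forecasts over (win, draw, loss), one row per participant.
forecasts = np.array([[0.50, 0.30, 0.20],
                      [0.60, 0.25, 0.15],
                      [0.40, 0.40, 0.20]])
crowd = forecasts.mean(axis=0)               # linear opinion pool

outcome = np.array([1.0, 0.0, 0.0])          # the home team won
brier = lambda p: np.sum((p - outcome) ** 2) # lower is better
print(brier(crowd), [round(brier(p), 3) for p in forecasts])
```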
Specifically, some of these strategies were able to achieve high scores in our contest.'\nauthor:\n- |\n Marco Inácio, Rafael Izbicki, Danilo Lopes,\\\n Luis Ernesto Salasar, João Poloniato\\\n and Marcio Alves" +"---\nabstract: 'A focused acoustic standing wave creates a Hookean potential well for a small sphere and can levitate it stably against gravity. Exposing the trapped sphere to a second transverse travelling sound wave imposes an additional acoustical force that drives the sphere away from its mechanical equilibrium. The driving force is shaped by interference between the standing trapping wave and the traveling driving wave. If, furthermore, the traveling wave is detuned from the standing wave, the driving force oscillates at the difference frequency. Far from behaving like a textbook driven harmonic oscillator, however, the wave-driven harmonic oscillator instead exhibits a remarkably rich variety of dynamical behaviors arising from the spatial dependence of the driving force. These include oscillations at both harmonics and subharmonics of the driving frequency, period-doubling routes to chaos and Fibonacci cascades. This model system therefore illustrates opportunities for dynamic acoustical manipulation based on spectral control of the sound field, rather than spatial control.'\nauthor:\n- 'Mohammed A. Abdelaziz'\n- 'David G. Grier'\ntitle: Dynamics of an acoustically trapped sphere in beating sound waves\n---\n\nAcoustical manipulation is emerging as an attractive alternative to optical manipulation for applications where large forces are required to move sizeable objects over" +"---\nabstract: 'The hadronic light-by-light contribution to the muon anomalous magnetic moment depends on an integration over three off-shell momenta squared ($Q_i^2$) of the correlator of four electromagnetic currents, with the fourth leg at zero momentum. We derive the short-distance expansion of this correlator in the limit where all three $Q_i^2$ are large and in the Euclidean domain in QCD. This is done via a systematic operator product expansion (OPE) in a background field which we construct. The leading-order term in the expansion is the massless quark loop. We also compute the non-perturbative part of the next-to-leading contribution, which is suppressed by quark masses, and the chiral limit part of the next-to-next-to-leading contributions to the OPE. We build a renormalisation program for the OPE. The numerical role of the higher-order contributions is estimated and found to be small.'\n---\n\nLU TP 20-47\\\narXiv:2008.13487\\\nrevised October 2020\n\n[Johan Bijnens$^a$, Nils Hermansson-Truedsson$^b$, Laetitia Laub$^{b}$, Antonio Rodríguez-Sánchez$^{a}$]{}\n\n$^a$[*Department of Astronomy and Theoretical Physics, Lund University,*]{}\n\n*Sölvegatan 14A, SE 223-62 Lund, Sweden*\n\n${}^b$ *Albert Einstein Center for Fundamental Physics, Institute for Theoretical Physics,*\n\n[*Universität Bern, Sidlerstrasse 5, CH–3012 Bern, Switzerland*]{}\n\nIntroduction\n============\n\nThe Standard Model (SM) is the theoretical framework developed to" +"---\nabstract: |\n Detection of military assets on the ground can be performed by applying deep learning-based object detectors to drone surveillance footage. The traditional way of hiding military assets from sight is camouflage, for example by using camouflage nets. However, large assets like planes or vessels are difficult to conceal by means of traditional camouflage nets.
An alternative type of camouflage is directly misleading automatic object detectors. Recently, it has been observed that small adversarial changes applied to images of the object can cause deep learning-based detectors to produce erroneous output. In particular, adversarial attacks have been successfully demonstrated to suppress person detections in images, requiring a patch with a specific pattern held up in front of the person, thereby essentially camouflaging the person for the detector. Research into this type of patch attack is still limited and several questions related to the optimal patch configuration remain open.\n\n This work makes two contributions. First, we apply patch-based adversarial attacks to the use case of unmanned aerial surveillance, where the patch is laid on top of large military assets, camouflaging them from automatic detectors running over the imagery. The patch can prevent automatic detection of the whole object while" +"---\nabstract: 'Algorithms for triangle-finding, the smallest nontrivial instance of the $k$-clique problem, have been proposed for quantum computers. Still, those algorithms assume the use of fixed-access-time quantum RAM (QRAM). We present a practical gate-based approach to both the triangle-finding problem and its NP-hard k-clique generalization. We examine both constant factors for near-term implementation on a Noisy Intermediate-Scale Quantum (NISQ) device, and the scaling of the problem to evaluate long-term use of quantum computers. We compare the time complexity and circuit practicality of the theoretical approach and actual implementation. We propose and apply two different strategies to the $k$-clique problem, examining the circuit size of Qiskit implementations. We analyze our implementations by simulating triangle finding with various error models, observing how they damp the amplitude of the correct answer, and compare to execution on six real IBMQ machines. Finally, we estimate the date when the methods proposed can run effectively on an actual device based on IBM’s quantum volume exponential growth forecast and the results of our error analysis.'\nauthor:\n- |\n \\\n *Keio University*\\\n Fujisawa, Japan\\\n [sara@sfc.wide.ad.jp]{}\n- |\n \\\n *Nagoya University*\\\n Nagoya, Japan\\\n [legall@math.nagoya-u.ac.jp]{}\n- |\n \\\n *Keio University*\\\n Fujisawa, Japan\\\n [rdv@sfc.wide.ad.jp]{}\ntitle:" +"---\nabstract: 'With the wider availability of sensor technology through easily affordable sensor devices, a number of Structural Health Monitoring (SHM) systems are deployed to monitor vital civil infrastructure. The continuous monitoring provides valuable information about the health of the structure that can help in providing a decision support system for retrofits and other structural modifications. However, when the sensors are exposed to harsh environmental conditions, the data measured by the SHM systems tend to be affected by multiple anomalies caused by faulty or broken sensors. Given a deluge of high-dimensional data collected continuously over time, research into using machine learning methods to detect anomalies is a topic of great interest to the SHM community. This paper contributes to this effort by proposing the use of a relatively new time series representation named “Shapelet Transform” in combination with a Random Forest classifier to autonomously identify anomalies in SHM data.
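The core primitive behind the representation discussed next is the minimum distance between a candidate shapelet and all equal-length subsequences of a series; stacking such distances over a set of shapelets yields the feature vector passed to the classifier. The sketch below uses the plain sliding-window Euclidean convention, with per-window normalization omitted for brevity.

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between the shapelet and every
    sliding window of the series with the same length."""
    m = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, m)
    return np.sqrt(((windows - shapelet) ** 2).sum(axis=1)).min()

def shapelet_transform(series, shapelets):
    """One shape-based feature per shapelet; the resulting vectors can
    be fed to any standard classifier (e.g. a random forest)."""
    return np.array([shapelet_distance(series, s) for s in shapelets])

t = np.linspace(0, 10, 500)
x = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(500)
feats = shapelet_transform(x, [np.sin(np.linspace(0, np.pi, 50))])
```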
The shapelet transform is a unique time series representation that is solely based on the shape of the time series data. In consideration of the individual characteristics unique to every anomaly, the application of this transform yields a new shape-based feature representation that can be combined with any standard machine learning algorithm" +"---\nabstract: 'Deep learning (DL)-based models have demonstrated good performance in medical image segmentation. However, the models trained on a known dataset often fail when applied to an unseen dataset collected from different centers, vendors and disease populations. In this work, we present a random style transfer network to tackle the domain generalization problem for multi-vendor and multi-center cardiac image segmentation. Style transfer is used to generate training data with a wider distribution/heterogeneity, namely domain augmentation. As the target domain could be unknown, we randomly generate a modality vector for the target modality in the style transfer stage, to simulate the domain shift for unknown domains. The model can be trained in a semi-supervised manner by simultaneously optimizing a supervised segmentation objective and an unsupervised style translation objective. In addition, the framework incorporates the spatial information and shape prior of the target by introducing two regularization terms. We evaluated the proposed framework on 40 subjects from the M&Ms challenge 2020, and obtained promising segmentation performance on data from unknown vendors and centers.'\nauthor:\n- Lei Li\n- 'Veronika A. Zimmer'\n- Wangbin Ding\n- Fuping Wu\n- Liqin Huang\n- 'Julia A. Schnabel'\n- 'Xiahai Zhuang ${^{(\\textrm{\\Letter})}}$\\'\nbibliography:\n- 'AllBibliography\\_STACOM2020.bib'" +"---\nabstract: 'Corresponding to a hypergraph $G$ with $d$ vertices, a quantum hypergraph state is defined by $\\ket{G} = \\frac{1}{\\sqrt{2^d}}\\sum_{n = 0}^{2^d - 1} (-1)^{f(n)} \\ket{n}$, where $f$ is a $d$-variable Boolean function depending on the hypergraph $G$, and $\\ket{n}$ denotes a binary vector of length $2^d$ with $1$ at $n$-th position for $n = 0, 1, \\dots (2^d - 1)$. The non-classical properties of these states are studied. We consider annihilation and creation operators on the Hilbert space of dimension $2^d$ acting on the number states $\\{\\ket{n}: n = 0, 1, \\dots (2^d - 1)\\}$. The Hermitian number and phase operators, in finite dimensions, are constructed. The number-phase uncertainty for these states leads to the idea of phase squeezing. We establish that these states are squeezed in the phase quadrature only and satisfy the Agarwal-Tara criterion for non-classicality, which only depends on the number of vertices of the hypergraphs. We also point out that coherence is observed in the phase quadrature.'\nauthor:\n- |\n Ramita Sarkar$^1$, Supriyo Dutta$^2$[^1], Subhashish Banerjee$^3$, Prasanta K. Panigrahi$^1$\\\n $^1$ Department of Physical Sciences\\\n Indian Institute of Science Education and Research Kolkata\\\n Mohanpur, Nadia, West Bengal, India - 741246.\\\n $^2$ Centre for Theoretical Studies\\\n Indian" +"---\nabstract: 'Inverse problems in fluid dynamics are ubiquitous in science and engineering, with applications ranging from electronic cooling system design to ocean modeling. We propose a general and robust approach for solving inverse problems in the steady-state Navier-Stokes equations by combining deep neural networks and numerical partial differential equation (PDE) schemes.
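To make the coupling of learning and PDE solving concrete before the description below, consider a toy inverse problem for 1-D steady diffusion, $-(a(x)u')' = f$; a finite-difference gradient stands in for the reverse-mode automatic differentiation used by the actual framework, and neither ADCME nor the Navier-Stokes setting is reproduced here.

```python
import numpy as np

n = 32; h = 1.0 / n
f = np.ones(n - 1)
xm = (np.arange(n) + 0.5) * h            # edge midpoints carrying a(x)

def solve(a):
    """Tridiagonal finite-difference solve of -(a u')' = f, u(0)=u(1)=0."""
    main = (a[:-1] + a[1:]) / h**2
    off = -a[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f)

a_true = 1.0 + 0.5 * np.sin(2 * np.pi * xm)   # hidden coefficient field
u_obs = solve(a_true)                          # synthetic observations
loss = lambda a: 0.5 * np.sum((solve(a) - u_obs) ** 2)

a, eps, step = np.ones(n), 1e-6, 0.02
for _ in range(200):                           # normalized gradient descent
    g = np.array([(loss(a + eps * np.eye(n)[i]) -
                   loss(a - eps * np.eye(n)[i])) / (2 * eps)
                  for i in range(n)])
    a -= step * g / (np.linalg.norm(g) + 1e-12)
```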
Our approach expresses numerical simulation as a computational graph with differentiable operators. We then solve inverse problems by constrained optimization, using gradients calculated from the computational graph with reverse-mode automatic differentiation. This technique enables us to model unknown physical properties using deep neural networks and embed them into the PDE model. We demonstrate the effectiveness of our method by computing spatially-varying viscosity and conductivity fields with deep neural networks (DNNs) and training the DNNs using partial observations of velocity fields. We show that the DNNs are capable of modeling complex spatially-varying physical fields with sparse and noisy data. Our implementation leverages the open-access ADCME, a library for solving inverse modeling problems in scientific computing using automatic differentiation.'\nauthor:\n- |\n Tiffany Fan,^1^ Kailai Xu,^1^ Jay Pathak,^2^ Eric Darve^1,\\ 3^\\\n ^1^Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA 94305, USA;\\\n {tiffan, kailaix, darve}@stanford.edu\\\n ^2^ Ansys Inc.," +"---\nabstract: |\n A number of recent Molecular Dynamics (MD) simulations have demonstrated that screw dislocations in face-centered cubic (fcc) metals can achieve stable steady-state motion above the lowest shear wave speed ($v_{\\textrm{shear}}$) which is parallel to their direction of motion (often referred to as transonic motion). This is in direct contrast to classical continuum analyses which predict a divergence in the elastic energy of the host material at a crystal-geometry-dependent ‘critical’ velocity $v_{\\textrm{crit}}$. Within this work, we first demonstrate through analytic analyses that the elastic energy of the host material diverges at a dislocation velocity ($v_{\\textrm{crit}}$) which is greater than $v_{\\textrm{shear}}$, i.e. $v_{\\textrm{crit}} > v_{\\textrm{shear}}$. We argue that it is this latter derived velocity ($v_{\\textrm{crit}}$) which separates ‘subsonic’ and ‘supersonic’ regimes of dislocation motion in the analytic solution.\n\n In addition to our analyses, we also present a comprehensive suite of MD simulation results of steady-state screw dislocation motion for a range of stresses and several cubic metals at both cryogenic and room temperatures. At room temperature, both our independent MD simulations and the earlier works find stable screw dislocation motion only below our derived $v_{\\textrm{crit}}$. Nonetheless, in real-world polycrystalline materials $v_{\\textrm{crit}}$ cannot be interpreted" +"---\nabstract: 'In this work, we introduce *iviz*, a mobile application for visualizing ROS data. In the last few years, the popularity of ROS has grown enormously, making it the standard platform for open-source robotic programming. A key reason for this success is the availability of polished, general-purpose modules for many tasks, such as localization, mapping, path planning, and quite importantly, data visualization. However, the availability of the latter is generally restricted to PCs with the Linux operating system. Thus, users who want to see what is happening in the system with a smartphone or a tablet are stuck with solutions such as screen mirroring or using web browser versions of *rviz*, which are difficult to interact with from a mobile interface. More importantly, this makes newer visualization modalities such as Augmented Reality impossible.
Our application iviz, based on the Unity engine, addresses these issues by providing a visualization platform designed from scratch to be usable on mobile platforms, such as iOS, Android, and UWP, and including native support for Augmented Reality for all three platforms. If desired, it can also be used on a PC with Linux, Windows, or macOS without any changes.'\nauthor:\n- 'antonio.zea@kit.edu, uwe.hanebeck@kit.edu'\nbibliography:" +"---\nauthor:\n- 'Igor D. Karachentsev, Lidia N. Makarova, R. Brent Tully, Gagandeep S. Anand, Luca Rizzi, and Edward J. Shaya'\ntitle: 'Distance and mass of the M104 (Sombrero) group'\n---\n\n[Distances and radial velocities of galaxies in the vicinity of the luminous early-type galaxy M104 (Sombrero) are used to derive its dark matter mass.]{} [Two dwarf galaxies, UGCA307 and KKSG30, situated near M104, were observed with the Advanced Camera for Surveys on the Hubble Space Telescope. The distances $9.03^{+0.84}_{-0.51}$ Mpc (UGCA307) and $9.72^{+0.44}_{-0.41}$ Mpc (KKSG30) were determined using the tip of the red giant branch method. These distances are consistent with the dwarf galaxies being satellites of Sombrero.]{} [Using radial velocities and projected separations of UGCA307, KKSG30, and a third galaxy with an accurate distance (KKSG29), as well as 12 other assumed companions with less accurate distances, the total mass of M104 is estimated to be $(1.55\\pm0.49)\\times 10^{13} M_{\\odot}$. At the $K$-band luminosity of the Sombrero galaxy of $2.4\\times 10^{11} L_{\\odot}$, its total mass-to-luminosity ratio is $M_T/L_K = (65\\pm20) M_{\\odot}/L_{\\odot}$, which is about three times higher than that of luminous bulgeless galaxies.]{}\n\nIntroduction\n============\n\nThe Local Volume of the Universe amounts to almost a thousand galaxies having distance" +"---\nabstract: 'We study tree approximations to classical two-body partition functions on sparse and loopy graphs via the Brydges-Kennedy-Abdessalam-Rivasseau forest expansion. We show that for sparse graphs (with large cycles), the partition function above a certain temperature $T^*$ can be approximated by a graph polynomial expansion over forests of the interaction graph. Within this “forest phase”, we show that the approximation can be written in terms of a reference tree $\\mathcal T$ on the interaction graph, with corrections due to cycles. From this point of view, this implies that high-temperature models are easy to solve on sparse graphs, as one can evaluate the partition function using belief propagation. We also show that there exist high- and low-temperature regimes, in which $\\mathcal T$ can be obtained via a maximal spanning tree algorithm on a (given) weighted graph. We study the algebra of these corrections and provide first- and second-order approximations to the tree Ansatz, and give explicit examples for the first-order approximation.'\nauthor:\n- 'F.
Caravelli'\ntitle: 'Forest expansion of two-body partition functions for sparse interaction graphs'\n---\n\nIntroduction\n============\n\nThere has been great interest in the study of statistical models on arbitrary graphs ever since Bethe introduced the notion" +"---\nauthor:\n- |\n Ademir Xavier Jr[^1].\\\n Brazilian Space Agency,\\\n Brasília, DF - Brazil\ntitle: |\n **The mass-energy relation and\\\n the Doppler shift of a relativistic light source**[^2]\n---\n\n\\\n**Keywords**: Doppler shift, light emission, frequency shift, relativistic source, mass-energy relation.\\\n\nIntroduction\n============\n\nRelativity has changed the way we understand the dynamics of bodies interacting via electromagnetic radiation. In fact, the development of relativity can be seen as an attempt to unify electromagnetism and mechanics [@wittaker]. Since mechanics provided a wide range of applications in the two centuries that followed Newton’s work and therefore was seen as a solid theoretical framework, relativity and its new world view were deep revolutionary steps after their first predictions were confirmed. This revolution represented the incorporation of electromagnetic laws into our understanding of the mechanical world.\n\nThe historical context of relativity coincides with the downfall of the Ether hypothesis [@wittaker] as an all-pervading medium responsible for the propagation of light much in the same way as the air is the medium in which sound waves propagate. The existence of a frequency shift in the light emitted by a moving source was seen both as evidence of this medium [@wittaker] and of the" +"---\nabstract: 'We consider oblivious transfer protocols performed over binary symmetric channels in a malicious setting where parties will actively cheat if they can. We provide constructions purely based on coding theory that achieve an explicit positive rate, the essential ingredient being the existence of linear codes whose Schur products are asymptotically good.'\nauthor:\n- 'Frédérique Oggier [^1]'\n- 'Gilles Zémor [^2]'\ntitle: Coding Constructions for Efficient Oblivious Transfer from Noisy Channels\n---\n\nIntroduction\n============\n\nA 1-out-of-2 oblivious transfer is a cryptographic protocol between two players, Alice, who owns two secrets, and Bob, who wishes to acquire one of them. The protocol ensures that one of the secrets is delivered to Bob, while no information about the other secret leaks: furthermore, Alice has no information about which secret Bob selects.\n\n1-out-of-2 oblivious transfer protocols were introduced by Even, Goldreich and Lempel [@EGL], though it was shown that they are equivalent to a variant originally proposed by Rabin [@Rabin]. Oblivious transfer has found numerous applications since, notably to multiparty computation [@Harnik]. Oblivious transfer was first considered in the case of computationally bounded participants, but Crépeau and Kilian later [@CK] introduced the idea of unconditionally secure oblivious transfer, by considering the situation" +"---\nabstract: 'Robust topology optimization (RTO) improves the robustness of designs with respect to random sources in real-world structures, yet an accurate sensitivity analysis requires the solution of many systems of equations at each optimization step, leading to a high computational cost.
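One inexpensive route around this cost is stochastic approximation in a non-Euclidean geometry. As a point of reference for the approach introduced next, a generic entropic mirror-descent (exponentiated-gradient) update on a probability simplex is sketched below; the stochastic gradient, step-size schedule, and simplex constraint are placeholder assumptions standing in for the actual volume-constrained design problem.

```python
import numpy as np

def entropic_md_step(x, grad, eta):
    """Mirror-descent update with the entropy mirror map (l1 geometry):
    x_new proportional to x * exp(-eta * grad), renormalized to the simplex."""
    y = x * np.exp(-eta * grad)
    return y / y.sum()

rng = np.random.default_rng(0)
x = np.full(100, 1 / 100)                  # start at the simplex center
for t in range(1, 201):
    g = rng.standard_normal(100)           # stand-in for a 2-sample gradient
    x = entropic_md_step(x, g, eta=0.5 / np.sqrt(t))
```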
To open up the full potential of RTO under a variety of random sources, this paper presents a momentum-based accelerated mirror descent stochastic approximation (AC-MDSA) approach to efficiently solve RTO problems involving various types of load uncertainties. The proposed framework can perform high-quality design updates with highly noisy stochastic gradients. We reduce the sample size to two (minimum for unbiased variance estimation) and show only two samples are sufficient for evaluating stochastic gradients to obtain robust designs, thus drastically reducing the computational cost. We derive the AC-MDSA update formula based on $\\ell_1$-norm with entropy function, which is tailored to the geometry of the feasible domain. To accelerate and stabilize the algorithm, we integrate a momentum-based acceleration scheme, which also alleviates the step size sensitivity. Several 2D and 3D examples with various sizes are presented to demonstrate the effectiveness and efficiency of the proposed AC-MDSA framework to handle RTO involving various types of loading uncertainties.'\naddress:\n- 'Department of" +"---\nabstract: |\n The multiplicative decomposition model is widely employed for predicting residual stresses and morphologies of biological tissues due to growth. However, it relies on the assumption that the tissue is initially in a stress-free state, which conflicts with the observations that any growth state of a biological tissue is under a significant level of residual stresses that helps to maintain its ideal mechanical conditions. Here, we propose a modified multiplicative decomposition model in which the initial state (or reference configuration) of a biological tissue is endowed with a residual stress instead of being stress-free.\n\n Releasing theoretically the initial residual stress, the initially stressed state is first transmitted into a virtual stress-free state, thus resulting in an initial elastic deformation. The initial virtual stress-free state subsequently grows to another counterpart with a growth deformation, and the latter is further integrated into its natural configuration of a real tissue with an excessive elastic deformation that ensures tissue compatibility. With this decomposition, the total deformation arising during growth may be expressed as the product of elastic deformation, growth deformation and initial elastic deformation, while the corresponding free energy density should depend on the initial residual stress and the total deformation. Three" +"---\nauthor:\n- Giovanni Calice\n- Carlo Sala\n- Daniele Tantari\nbibliography:\n- 'My\\_Biblio.bib'\ntitle: Contingent Convertible Bonds in Financial Networks\n---\n\nIntroduction {#sec: Introduction .unnumbered}\n============\n\nThe 2007-2009 financial crisis highlighted the critical role of interbank interconnectedness in the stability of the global financial system and underscored a network-based approach to develop effective strategies for mitigating financial risk [@battiston2012debtrank; @cimini2015systemic; @somin2020network; @petrone2018dynamic; @boersma2020reducing; @so2022assessing]. Designed to reduce the impact of a lack of short-term liquidity in times of financial distress, Contingent Convertible bonds (henceforth, CoCos) have been extensively issued in the aftermath of the 2008-2009 financial crisis, with the goal of serving as a protective buffer during adverse times. 
CoCos are coupon-paying bonds that, either convert into equity shares, or are (fully or partially) written-off, when the issuer reaches a pre-specified level of financial distress. Hence, CoCos serve as regulatory instruments designed to absorb unexpected future losses of the issuing bank through automatic recapitalization triggered at a predefined level. This mechanism provides additional loss-absorbing capital to undercapitalized banks during periods when raising fresh equity capital would be challenging.\n\nFirst proposed by [@Merton_1991], who initially described the use of CoCos as a capitalization buffer during economic downturns, the literature on" +"---\nabstract: 'We demonstrate the SciLens News Platform, a novel system for evaluating the quality of news articles. The SciLens News Platform automatically collects contextual information about news articles in real-time and provides quality indicators about their validity and trustworthiness. These quality indicators derive from i) social media discussions regarding news articles, showcasing the reach and stance towards these articles, and ii) their content and their referenced sources, showcasing the journalistic foundations of these articles. Furthermore, the platform enables domain-experts to review articles and rate the quality of news sources. This augmented view of news articles, which combines automatically extracted indicators and domain-expert reviews, has provably helped the platform users to have a better consensus about the quality of the underlying articles. The platform is built in a distributed and robust fashion and runs operationally handling daily thousands of news articles. We evaluate the SciLens News Platform on the emerging topic of *COVID-19* where we highlight the discrepancies between low and high-quality news outlets based on three axes, namely their newsroom activity, evidence seeking and social engagement. A live demonstration of the platform can be found here: **.'\nauthor:\n- 'Angelika Romanou$^\\dagger$ Panayiotis Smeros$^\\dagger$ Carlos Castillo$^\\ddagger$ Karl Aberer$^\\dagger$'\n- |" +"---\naddress:\n- ', , '\n- ', , '\nauthor:\n- 'Ioannis Kordonis\\*'\n- 'Athanasios-Rafail Lagos'\n- 'George P. \u00a0Papavassilopoulos'\nbibliography:\n- 'refs3.bib'\nnocite: '[@*]'\ntitle: 'Nash Social Distancing Games with Equity Constraints: How Inequality Aversion Affects the Spread of Epidemics[^1]'\n---\n\nIntroduction\n============\n\nEpidemics harass humanity for centuries, and people investigate several strategies to contain them. The development of medicines and vaccines and the evolution of healthcare systems with specialized personnel and equipped hospitals have significantly affected the spread of many epidemics and have even eliminated some contagious diseases. However, during the current COVID-19 pandemic, due to the lack or scarcity of appropriate medicines and vaccines, Non-Pharmaceutical Interventions (primarily social distancing) have been among the most effective strategies to reduce the disease spread. Due to the slow roll-out of the vaccines, their uneven distribution, the emergence of SARS-CoV-2 variants, age limitations, and people\u2019s resistance to vaccination, social distancing is likely to remain significant in a large part of the globe for the near future.\n\nEpidemiological models are essential in designing measures and strategies to control epidemics[^2]. In the last century, epidemiologists have made significant progress in the mathematical modeling of the spread of epidemics. 
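As a concrete point of reference for the compartmental models invoked in this record, below is a minimal SIR integration; the parameters (beta = 0.3, gamma = 0.1) and the forward-Euler scheme are illustrative assumptions of this sketch, not taken from the paper.

```python
# Minimal SIR compartmental model integrated with forward Euler.
# Parameters are illustrative, not taken from the record above.
import numpy as np

def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=0.1):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    steps = int(days / dt)
    s, i, r = s0, i0, 0.0
    trajectory = np.empty((steps, 3))
    for t in range(steps):
        trajectory[t] = (s, i, r)
        new_inf = beta * s * i * dt   # mass-action incidence
        new_rec = gamma * i * dt      # recoveries
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return trajectory

if __name__ == "__main__":
    traj = simulate_sir()
    print(f"peak infected fraction: {traj[:, 1].max():.3f}")  # ~0.3 here (R0 = 3)
```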
From the seminal works" +"---\nabstract: 'The discovery of a luminous radio burst, [FRB 200428]{}, with properties similar to those of fast radio bursts (FRB), in coincidence with an X-ray flare from the Galactic magnetar SGR 1935+2154, supports magnetar models for cosmological FRBs. The burst\u2019s X-ray to radio fluence ratio, as well as the X-ray spectral shape and peak energy, are consistent with [FRB 200428]{}\u00a0being the result of an ultra-relativistic shock (powered, e.g., by an ejected plasmoid) propagating into a magnetized baryon-rich external medium; the shock simultaneously generates X-ray/gamma-rays via thermal synchrotron emission from electrons heated behind the shock, and coherent radio emission via the synchrotron maser mechanism. Here, we point out that a unique consequence of this baryon-loaded shock scenario is the generation of a coincident burst of high-energy neutrinos, generated by photo-hadronic interaction of relativistic ions$-$heated or accelerated at the shock$-$with thermal synchrotron photons. We estimate the properties of these neutrino burst FRB counterparts and find that a fraction $\\sim 10^{-8}-10^{-5}$ of the flare energy (or $\\sim 10^{-4}-10^{-1}$ of the radio isotropic energy) is channeled into production of neutrinos with typical energies $\\sim$ TeV$-$PeV. We conclude by discussing prospects for detecting this signal with IceCube and future high-energy neutrino detectors.'\nauthor:" +"---\nabstract: 'The unique features of quantum theory offer a powerful new paradigm for information processing. Translating these mathematical abstractions into useful algorithms and applications requires quantum systems with significant complexity and sufficiently low error rates. Such quantum systems must be made from robust hardware that can coherently store, process, and extract the encoded information, as well as possess effective quantum error correction (QEC) protocols to detect and correct errors. Circuit quantum electrodynamics (cQED) provides a promising hardware platform for implementing robust quantum devices. In particular, bosonic encodings in cQED that use multi-photon states of superconducting cavities to encode information have shown success in realizing hardware-efficient QEC. Here, we review recent developments in the theory and implementation of quantum error correction with bosonic codes and report the progress made towards realizing fault-tolerant quantum information processing with cQED devices.'\nauthor:\n- Atharv Joshi\n- Kyungjoo Noh\n- 'Yvonne Y. Gao'\nbibliography:\n- 'references.bib'\ntitle: Quantum Information Processing with Bosonic Qubits in Circuit QED\n---\n\n[^1]\n\nIntroduction\n============\n\nA quantum computer harnesses unique features of quantum theory, such as superposition and entanglement, to tackle classically challenging tasks. To perform faithful computation, quantum information must be protected against errors due to decoherence mechanisms" +"---\nabstract: 'We show that Whitham type equations $u_t + u u_x -\\mathcal{L} u_x = 0$, where $L$ is a general Fourier multiplier operator of order $\\alpha \\in [-1,1]$, $\\alpha\\neq 0$, allow for small solutions to be extended beyond their ordinary existence time. 
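A toy pseudospectral discretisation of the equation class just described, $u_t + u u_x - \mathcal{L} u_x = 0$. The concrete symbol $m(\xi) = (1+\xi^2)^{\alpha/2}$, the grid, and the step size are assumptions chosen only to show how a general Fourier multiplier of order $\alpha$ enters the time stepping; the theorem covers a broader class of operators.

```python
# Pseudospectral sketch of u_t + u*u_x - L u_x = 0 on a 2*pi-periodic domain.
# The symbol m(xi) = (1 + xi^2)^(alpha/2) is one illustrative inhomogeneous choice.
import numpy as np

N, alpha, dt, steps = 256, -0.5, 1e-3, 1000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
xi = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers
m = (1.0 + xi**2) ** (alpha / 2.0)           # Fourier multiplier of order alpha

def rhs(u):
    u_hat = np.fft.fft(u)
    ux = np.fft.ifft(1j * xi * u_hat).real          # u_x
    Lux = np.fft.ifft(m * 1j * xi * u_hat).real     # L u_x (L commutes with d/dx)
    return -u * ux + Lux

u = 0.05 * np.cos(x)                         # small initial datum
for _ in range(steps):                       # classical RK4 time stepping
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
print("max|u| after t = 1:", np.abs(u).max())
```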
The result is valid for a range of quadratic dispersive equations with inhomogenous symbols in the dispersive regime given by the parameter $\\alpha$.'\naddress:\n- 'Department of Mathematical Sciences, NTNU Norwegian University of Science and Technology, 7491 Trondheim, Norway.'\n- 'School of Mathematics and Statistics, Lanzhou University, 370000 Lanzhou, People\u2019s Republic of China.'\n- 'Universit'' e Paris-Saclay, CNRS, Laboratoire de Math\u00e9matiques d\u2019Orsay, 91405 Orsay, France.'\nauthor:\n- Mats Ehrnstr\u00f6m\n- Yuexun Wang\ntitle: Enhanced existence time of solutions to evolution equations of Whitham type\n---\n\n[^1]\n\nIntroduction\n============\n\nThe enhanced existence time of small solutions to weakly dispersive water wave equations has gained a lot of attention. Going back to Shatah\u2019s normal form [@MR803256] and the subsequent work of Delort and collaborators on the Klein\u2013Gordon equation [@MR2056326], it has obtained renewed momentum both through the work on the Burgers\u2013Hilbert equation [@MR2982741], but more generally through the analysis related to global well-posedness for the water-wave problem [@MR3460636; @MR2507638;" +"---\nabstract: 'It is shown that chiral plasmons, characterized by a longitudinal magnetic moment accompanying the longitudinal charge plasmon, lead to electromagnetic near-fields that are also chiral. For twisted bilayer graphene, we estimate that the near field chirality of screened plasmons can be several orders of magnitude larger than that of the related circularly polarized light. The chirality also manifests itself in a deflection angle that is formed between the direction of the plasmon propagation and its Poynting vector. Twisted van der Waals heterostructures might thus provide a novel platform to promote enantiomer-selective physio-chemical processes in chiral molecules without the application of a magnetic field or external nano-patterning that break time-reversal, mirror plane or inversion symmetry, respectively.'\nauthor:\n- 'T. Stauber'\n- 'T. Low'\n- 'G. G\u00f3mez-Santos'\nbibliography:\n- 'Chirality2.bib'\ntitle: 'Plasmon-enhanced near-field chirality in twisted van der Waals heterostructures'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nChirality is an important aspect in life as only one enantiomer of amino acids is present in nature.[@Barron04] Furthermore, chiral objects can only be distinguished through the interaction with other chiral objects. A prominent example is the circular dichroism of chiral molecules where a different absorption cross section is seen when changing the chirality of" +"---\nabstract: 'Node representation learning for signed directed networks has received considerable attention in many real-world applications such as link sign prediction, node classification and node recommendation. The challenge lies in how to adequately encode the complex topological information of the networks. Recent studies mainly focus on preserving the *first-order* network topology which indicates the closeness relationships of nodes. However, these methods generally fail to capture the *high-order* topology which indicates the local structures of nodes and serves as an essential characteristic of the network topology. In addition, for the *first-order* topology, the additional value of non-existent links is largely ignored. 
In this paper, we propose to learn more representative node embeddings by simultaneously capturing the *first-order* and *high-order* topology in signed directed networks. In particular, we reformulate the representation learning problem on signed directed networks from a variational auto-encoding perspective and further develop a decoupled variational embedding (DVE) method. DVE leverages a specially designed auto-encoder structure to capture both the *first-order* and *high-order* topology of signed directed networks, and thus learns more representative node embeddings. Extensive experiments are conducted on three widely used real-world datasets. Comprehensive results on both link sign prediction and node recommendation task demonstrate the effectiveness" +"---\nabstract: |\n Improving the resistance of deep neural networks against adversarial attacks is important for deploying models to realistic applications. However, most defense methods are designed to defend against intensity perturbations and ignore location perturbations, which should be equally important for deep model security. In this paper, we focus on adversarial deformations, a typical class of location perturbations, and propose a flow gradient regularization to improve the resistance of models. Theoretically, we prove that, compared with input gradient regularization, regularizing flow gradients is able to get a tighter bound.\n\n Over multiple datasets, architectures, and adversarial deformations, our empirical results indicate that models trained with flow gradients can acquire a better resistance than trained with input gradients with a large margin, and also better than adversarial training. Moreover, compared with directly training with adversarial deformations, our method can achieve better results in unseen attacks, and combining these two methods can improve the resistance further.\nauthor:\n- Pengfei Xia\n- Bin Li\nbibliography:\n- './references.bib'\ntitle: Improving Resistance to Adversarial Deformations by Regularizing Gradients\n---\n\nIntroduction\n============\n\nDeep neural networks (DNNs), especially convolutional neural networks (CNNs), have achieved remarkable success in computer vision tasks [@krizhevsky2012imagenet; @simonyan2014very; @girshick2015fast; @long2015fully; @xia2020boosting]. However, small," +"---\nabstract: 'The elastohydrodynamics of slender bodies in a viscous fluid have long been the source of theoretical investigation, being pertinent to the microscale world of ciliates and flagellates as well as to biological and engineered active matter more generally. Though recent works have overcome the severe numerical stiffness typically associated with slender elastohydrodynamics, employing both local and non-local couplings to the surrounding fluid, there is no framework of comparable efficiency that rigorously justifies its hydrodynamic accuracy. In this study, we combine developments in filament elastohydrodynamics with a recent slender-body theory, affording algebraic asymptotic accuracy to the commonly imposed no-slip condition on the surface of a slender filament of potentially non-uniform cross-sectional radius. Further, we do this whilst retaining the remarkable practical efficiency of contemporary elastohydrodynamic approaches, having drawn inspiration from the method of regularised Stokeslet segments to yield an efficient and flexible slender-body theory of regularised non-uniform segments.'\nauthor:\n- 'B. J. Walker'\n- 'E. A. 
Gaffney'\ntitle: 'Regularised non-uniform segments and efficient no-slip elastohydrodynamics'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe coupled elastohydrodynamics of flexible slender filaments are of intense interest to a breadth of active research communities, ranging from theoretical to experimental studies of filaments from the perspectives of" +"---\nabstract: |\n Suppose that a countably $n$-rectifiable set $\\Gamma_0$ is the support of a multiplicity-one stationary varifold in ${\\mathbb{R}}^{n+1}$ with a point admitting a flat tangent plane $T$ of density $Q \\ge 2$. We prove that, under a suitable assumption on the decay rate of the blow-ups of $\\Gamma_0$ towards $T$, there exists a *non-constant* Brakke flow starting with $\\Gamma_0$. This shows non-uniqueness of Brakke flow under these conditions, and suggests that the stability of a stationary varifold with respect to mean curvature flow may be used to exclude the presence of flat singularities.\\\n Keywords: mean curvature flow, varifolds, singularities of minimal surfaces.\\\n AMS Math Subject Classification (2020): 53E10 (primary), 49Q05.\naddress:\n- 'Department of Mathematics, The University of Texas at Austin, 2515 Speedway, Stop C1200, Austin TX 78712-1202, United States of America'\n- 'Department of Mathematics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551, Japan'\nauthor:\n- Salvatore Stuvard\n- Yoshihiro Tonegawa\nbibliography:\n- 'MCF\\_Plateau\\_biblio.bib'\ntitle: |\n Dynamical instability of minimal surfaces\\\n at flat singular points\n---\n\nIntroduction\n============\n\nA family of surfaces is said to move by mean curvature flow (abbreviated hereafter as MCF) if the velocity of motion is equal to the mean" +"---\nabstract: 'We examine continuous-variable gate teleportation using entangled states made from pure product states sent through a beam splitter. We show that such states are Choi states for a (typically) non-unitary gate, and we derive the associated Kraus operator for teleportation, which can be used to realize non-Gaussian, non-unitary quantum operations on an input state. With this result, we show how gate teleportation is used to perform error correction on bosonic qubits encoded using the Gottesman-Kitaev-Preskill code. This result is presented in the context of deterministically produced macronode cluster states, generated by constant-depth linear optical networks, supplemented with a probabilistic supply of GKP states. The upshot of our technique is that state injection for both gate teleportation and error correction can be achieved without active squeezing operations\u2014an experimental bottleneck for quantum optical implementations.'\nauthor:\n- 'Blayney W. Walshe'\n- 'Ben Q. Baragiola'\n- 'Rafael N. Alexander'\n- 'Nicolas C. Menicucci'\nbibliography:\n- 'ReferencesMaster.bib'\n- 'rafref.bib'\ntitle: 'Continuous-variable gate teleportation and bosonic-code error correction'\n---\n\nIntroduction {#scrivauto:12}\n============\n\nRecent strides in the experimental generation of continuous-variable (CV) cluster states [@Yokoyama2013; @Asavanant2019; @Larsen2019] prove that the use of CV measurement-based quantum computing (MBQC) is one of the most promising methods of" +"---\nabstract: 'Automatic speech recognition (ASR) in multimedia content is one of the promising applications, but speech data in this kind of content are frequently mixed with background music, which is harmful for the performance of ASR. 
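To make this record's problem setup concrete: a toy construction of a speech–music mixture at a prescribed SNR, the kind of input a separation front-end receives. The signals below are synthetic stand-ins and nothing here comes from the study's data pipeline.

```python
# Toy speech-music mixture at a prescribed SNR; signals are synthetic stand-ins.
import numpy as np

def mix_at_snr(speech, music, snr_db):
    """Scale `music` so that 10*log10(P_speech / P_music) = snr_db, then add."""
    p_speech = np.mean(speech**2)
    p_music = np.mean(music**2)
    gain = np.sqrt(p_speech / (p_music * 10 ** (snr_db / 10.0)))
    return speech + gain * music

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
speech = rng.standard_normal(fs) * np.hanning(fs)   # surrogate "speech"
music = 0.5 * np.sin(2 * np.pi * 440 * t)           # surrogate "music"
mixture = mix_at_snr(speech, music, snr_db=5.0)
print("mixture RMS:", np.sqrt(np.mean(mixture**2)))
```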
In this study, we propose a method for improving ASR with background music based on time-domain source separation. We utilize Conv-TasNet as a separation network, which has achieved state-of-the-art performance for multi-speaker source separation, to extract the speech signal from a speech-music mixture in the waveform domain. We also propose joint fine-tuning of a pre-trained Conv-TasNet front-end with an attention-based ASR back-end using both separation and ASR objectives. We evaluated our method through ASR experiments using speech data mixed with background music from a wide variety of Japanese animations. We show that time-domain speech-music separation drastically improves ASR performance of the back-end model trained with mixture data, and the joint optimization yielded a further significant WER reduction. The time-domain separation method outperformed a frequency-domain separation method, which reuses the phase information of the input mixture signal, both in simple cascading and joint training settings. We also demonstrate that our method works robustly for music interference from classical, jazz and popular genres.'\nauthor:\n-" +"---\nabstract: 'Information spreading social media platforms has become ubiquitous in our lives due to viral information propagation regardless of its veracity. Some information cascades turn out to be viral since they circulated rapidly on the Internet. The uncontrollable virality of manipulated or disorientated true information (fake news) might be quite harmful, while the spread of the true news is advantageous, especially in emergencies. We tackle the problem of predicting information cascades by presenting a novel variant of SEIZ (Susceptible/ Exposed/ Infected/ Skeptics) model that outperforms the original version by taking into account the cognitive processing depth of users. We define an information cascade as the set of social media users\u2019 reactions to the original content which requires at least minimal physical and cognitive effort; therefore, we considered retweet/ reply/ quote (mention) activities and tested our framework on the Syrian White Helmets Twitter data set from April 1st, 2018 to April 30th, 2019. In the prediction of cascade pattern via traditional compartmental models, all the activities are grouped, and their summation is taken into account; however, transition rates between compartments should vary according to the activity type since their requirements of physical and cognitive efforts are not same. Based on" +"---\nabstract: 'Online social media has become an important platform to organize around different socio-cultural and political topics. An extensive scholarship has discussed how people are divided into echo-chamber-like groups. However, there is a lack of work related to quantifying hostile communication or *affective polarization* between two competing groups. This paper proposes a systematic, network-based methodology for examining affective polarization in online conversations. Further, we apply our framework to 100 weeks of Twitter discourse about climate change. We find that deniers of climate change (Disbelievers) are more hostile towards people who believe (Believers) in the anthropogenic cause of climate change than vice versa. Moreover, Disbelievers use more words and hashtags related to natural disasters during more hostile weeks as compared to Believers. These findings bear implications for studying affective polarization in online discourse, especially concerning the subject of climate change. 
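A schematic of the directed cross-group bookkeeping that such a network-based hostility analysis rests on. The group labels, the toy interaction list, and the rate definition are assumptions for illustration only; the record's actual metric may differ.

```python
# Schematic cross-group hostility rates for two stance groups (toy data).
from collections import Counter

# (source_group, target_group, is_hostile) per reply/mention; all invented.
interactions = [
    ("disbeliever", "believer", True), ("disbeliever", "believer", True),
    ("disbeliever", "believer", False), ("believer", "disbeliever", True),
    ("believer", "disbeliever", False), ("believer", "believer", False),
]

totals, hostile = Counter(), Counter()
for src, dst, is_hostile in interactions:
    if src != dst:                       # keep only cross-group edges
        totals[(src, dst)] += 1
        hostile[(src, dst)] += is_hostile

for pair, n in totals.items():
    print(pair, "hostility rate:", hostile[pair] / n)
```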
Lastly, we discuss our findings in the context of increasingly important climate change communication research.'\nauthor:\n- \n- \n- \nbibliography:\n- 'references.bib'\ntitle: |\n Affective Polarization in Online Climate Change Discourse on Twitter\\\n [^1] \n---\n\nclimate change, affective polarization, stance detection, online social networks\n\nIntroduction\n============\n\nOnline social networks represent a powerful space for public discourse. Through large-scale, interconnected platforms" +"---\nabstract: |\n Recoverable robust optimization is a multi-stage approach, in which it is possible to adjust a first-stage solution after the uncertain cost scenario is revealed. We analyze this approach for a class of selection problems. The aim is to choose a fixed number of items from several disjoint sets, such that the worst-case costs after taking a recovery action are as small as possible. The uncertainty is modeled as a discrete budgeted set, where the adversary can increase the costs of a fixed number of items.\n\n While special cases of this problem have been studied before, its complexity has remained open. In this work we make several contributions towards closing this gap. We show that the problem is NP-hard and identify a special case that remains solvable in polynomial time. We provide a compact mixed-integer programming formulation and two additional extended formulations. Finally, computational results are provided that compare the efficiency of different exact solution approaches.\nauthor:\n- 'Marc Goerigk[^1]'\n- 'Stefan Lendl[^2]'\n- 'Lasse Wulf[^3]'\ntitle: Recoverable Robust Representatives Selection Problems with Discrete Budgeted Uncertainty\n---\n\n**Keywords:** robustness and sensitivity analysis; robust optimization; discrete budgeted uncertainty; combinatorial optimization; selection problems\n\n**Funding:** This work was supported by the" +"---\nabstract: 'As data generation increasingly takes place on devices without a wired connection, machine learning (ML) related traffic will be ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable to support ML, which creates the need for new wireless communication methods. In this survey, we give an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes within the literature, analog over-the-air computation and digital radio resource management optimized for ML. This survey gives a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.'\nauthor:\n- 'Hellstr\u00f6m,Henrik'\n- 'Barros da Silva Jr.,Jos\u00e9 Mairton'\n- 'Amiri,Mohammad\u00a0Mohammadi'\n- 'Chen,Mingzhe'\n- 'Fodor,Viktoria'\n- 'Poor,H.\u00a0Vincent'\n- 'Fischione,Carlo'\nbibliography:\n- 'sample-now.bib'\ntitle: |\n Wireless for Machine Learning:\\\n a Survey\n---\n\nIntroduction {#sec:introduction}\n============\n\nWith the increasing popularity of mobile devices and the continuous growth of , we are having increasing access to vast amounts of distributed data. According to a recent report from Ericsson, the global number of connected devices will rise to 4.1 billion by 2024\u00a0[@ericssonmobility], which is four times" +"---\nauthor:\n- 'Minxian\u00a0Xu,\u00a0 Adel N. 
Toosi,\u00a0 and\u00a0Rajkumar Buyya,\u00a0 [^1]'\nbibliography:\n- 'library.bib'\ntitle: 'A Self-adaptive Approach for Managing Applications and Harnessing Renewable Energy for Sustainable Cloud Computing'\n---\n\n[Shell : Bare Demo of IEEEtran.cls for Journals]{}\n\nIntroduction {#sec:introdution}\n============\n\nToday\u2019s society and its organizations are becoming ever-increasingly dependent upon information and communication technologies (ICT) with software systems, especially web systems, largely hosted on cloud data centers. Clouds offer an exciting benefit to enterprises by removing the need for building own Information Technology (IT) infrastructures and shifting the focus from the IT and infrastructure issues to core business competence. Apart from the infrastructure, elasticity, availability, and pay-as-you-go pricing model are among many other reasons which led to the rise of cloud computing [@Kilcioglu2017]. This massive growth in cloud solutions demanded the establishment of huge number of data centers around the world owned by enterprises and large cloud service providers such as Amazon, Microsoft, and Google to offer their services [@Chen2019WWW].\n\nHowever, data centers hosting cloud services consume a large amount of electricity leading to high operational costs and high carbon footprint on the environment [@Jiang2019ISCA]. ICT sector nowadays consumes approximately 7% of the global electricity, and it is" +"---\nabstract: 'The Collatz conjecture holds that repeated conversions $n \\to n/2$ of even numbers, and $n\\to 3n+1$ of odd numbers define root paths to the \u2018trivial\u2019 cyclic root $4\\to 2\\to 1\\to 4\\to \\dots $ in one connected tree graph. Shortening paths of arrows from non-branching numbers to branching numbers into arrows yields a strictly binary graph, labeled $T_{\\ge 0}$. Each branching number has an upward child $U:n \\to 2n\\cdot2^{p-1}$ and a leftward child $L:n \\to (n-1)/3 \\cdot 2^q$. Graph $T_{\\ge 0}$ is an infinite automorphic graph that can be turned into disjoint same-generation cotrees, each containing one generation of either $U$pward successors of $L$eftward numbers or of $L$eftward successors of $U$pward numbers. Modular arithmetic reveals the powers $p$ and $q$, and also the number sets of $T_{\\ge 0}$ and each of its disjoint cotrees. Every branching number except the trivial root number $c=4$ is contained in one of the cotrees. The cyclic roots of all cotrees are in graph $T_{\\ge 0}$ numbers on a root path connected to the trivial root. Thus, all branching numbers, and all non-branching numbers on paths to them, have a root path to the trivial cyclic root.'\nauthor:\n- Jan Kleinnijenhuis\n- 'Alissa M." +"---\nabstract: 'There exists an urgent need for efficient tools in disease surveillance to help model and predict the spread of disease. The transmission of insect-borne diseases poses a serious concern to public health officials and the medical and research community at large. In the modeling of this spread, we face bottlenecks in (1) the frequency at which we are able to sample insect vectors in environments that are prone to propagating disease, (2) manual labor needed to set up and retrieve surveillance devices like traps, and (3) the return time in analyzing insect samples and determining if an infectious disease is spreading in a region. 
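(Returning for a moment to the Collatz record earlier in this chunk.) The elementary iteration it builds on — even $n \to n/2$, odd $n \to 3n+1$, terminating at the trivial cycle $4\to 2\to 1$ — can be checked directly; the helper name below is ours.

```python
# The elementary Collatz iteration: even n -> n/2, odd n -> 3n+1,
# following the root path down to the trivial cycle 4 -> 2 -> 1.
def collatz_root_path(n: int) -> list[int]:
    assert n >= 1
    path = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        path.append(n)
    return path

print(len(collatz_root_path(27)) - 1)  # 111 steps from 27 down to 1
```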
To help address these bottlenecks, we present in this paper the design, fabrication, and testing of a novel automated insect capture module (ICM) or trap that aims to improve the rate of transferring samples collected from the environment via aerial robots. The ICM features an ultraviolet light attractant, passive capture mechanism, panels which can open and close for access to insects, and a small onboard computer for automated operation and data logging. At the same time, the ICM is designed to be accessible; it is small-scale, lightweight and low-cost, and can be integrated with" +"---\nabstract: 'By a well-known theorem of Viterbo, the symplectic homology of the cotangent bundle of a closed manifold is isomorphic to the homology of its loop space. In this paper we extend the scope of this isomorphism in several directions. First, we give a direct definition of [*Rabinowitz loop homology*]{} in terms of Morse theory on the loop space and prove that its product agrees with the pair-of-pants product on Rabinowitz Floer homology. The proof uses compactified moduli spaces of punctured annuli. Second, we prove that, when restricted to [*positive*]{} Floer homology, resp.\u00a0loop space homology relative to the constant loops, the Viterbo isomorphism intertwines various constructions of secondary pair-of-pants coproducts with the loop homology coproduct. Third, we introduce [*reduced loop homology*]{}, which is a common domain of definition for a canonical reduction of the loop product and for extensions of the loop homology coproduct which together define the structure of a commutative cocommutative unital infinitesimal anti-symmetric bialgebra. Along the way, we show that the Abbondandolo-Schwarz quasi-isomorphism going from the Floer complex of quadratic Hamiltonians to the Morse complex of the energy functional can be turned into a filtered chain isomorphism by using linear Hamiltonians and the square root" +"---\nabstract: 'The geometry of the Universe may be probed using the Alcock-Paczy\u0144ski (AP) effect, in which the observed redshift size of a spherical distribution of sources relative to its angular size varies according to the assumed cosmological model. Past applications of this effect have been limited, however, by a paucity of suitable sources and mitigating astrophysical factors, such as internal redshift-space distortions and poorly known source evolution. In this [*Letter*]{}, we introduce a new test based on the AP effect that avoids the use of spatially bound systems, relying instead on sub-samples of quasars at redshifts $z\\lesssim 1.5$ in the Sloan Digital Sky Survey IV, with a possible extension to higher redshifts and improved precision when this catalog is expanded by upcoming surveys. We here use this method to probe the redshift-dependent expansion rate in three pertinent Friedmann-Lema\u00eetre-Robertson-Walker (FLRW) cosmologies: $\\Lambda$CDM, which predicts a transition from deceleration to acceleration at $z\\sim 0.7$; Einstein-de Sitter, in which the Universe is always decelerating; and the $R_{\\rm h}=ct$ universe, which expands at a constant rate. $\\Lambda$CDM is consistent with these data, but $R_{\\rm h}=ct$ is favoured overall.'\nauthor:\n- |\n Fulvio Melia$^{1}$[^1], [Jin Qin$^2$]{} and [Tong-Jie Zhang$^2$]{}\\\n $^1$Department of Physics, The Applied" +"---\nabstract: |\n The world has transitioned into a new phase of online learning in response to the recent Covid19 pandemic. 
Now more than ever, it has become paramount to push the limits of online learning in every manner to keep flourishing the education system. One crucial component of online learning is Knowledge Tracing (KT). The aim of KT is to model student\u2019s knowledge level based on their answers to a sequence of exercises referred as interactions. Students acquire their skills while solving exercises and each such interaction has a distinct impact on student ability to solve a future exercise. This *impact* is characterized by 1) the relation between exercises involved in the interactions and 2) student forget behavior. Traditional studies on knowledge tracing do not explicitly model both the components jointly to estimate the impact of these interactions.\n\n In this paper, we propose a novel Relation-aware self-attention model for Knowledge Tracing (RKT). We introduce a relation-aware self-attention layer that incorporates the contextual information. This contextual information integrates both the exercise relation information through their textual content as well as student performance data and the forget behavior information through modeling an exponentially decaying kernel function. Extensive experiments on three real-world" +"---\nabstract: 'We examine whether the LHCb vector $ud\\bar{c}\\bar{s}$ state $X(2900)$ can be interpreted as a kinematical cusp effect arising from $\\bar D^*K^*$ and $\\bar D_1 K\\*$ interactions. The production amplitude is modelled as a triangle diagram with hadronic final state interactions. A satisfactory fit to the Dalitz plot projection is obtained that leverages the singularities of the production diagram without the need for $\\bar{D}K$ resonances. A somewhat better fit is obtained if the final state interactions are strong enough to generate resonances, although the evidence in favour of this scenario is not conclusive.'\nauthor:\n- 'T.J. Burns'\n- 'E.S. Swanson'\ntitle: 'Kinematical Cusp and Resonance Interpretations of the $X(2900)$'\n---\n\n|u \\#1[[\\#1]{}]{} \\#1[[\\#1]{}]{}\n\nIntroduction\n============\n\nThe LHCb collaboration has announced the discovery of a $\\bar D K$ enhancement in the reaction $B\\to D\\bar D K$ that can be interpreted as Breit-Wigner resonances with parameters\u00a0[@LHCbX]: $$\\begin{aligned}\nX_0(2866)&;\\, J^P = 0^+, \\quad &&M = 2866.3 \\pm 6.5 \\pm 2 \\MeV,\\ &&\\Gamma = 57.2 \\pm 12.2 \\pm 4.1\\MeV,\\ &&\\textrm{fit fraction} = 6\\%,\\\\\nX_1(2904)&;\\, J^P =1^-, \\quad &&M = 2904.1 \\pm 4.8 \\pm 1.3 \\MeV,\\ &&\\Gamma = 110.3 \\pm 10.7 \\pm 4.3 \\MeV,\\ &&\\textrm{fit fraction} = 31\\%.\\end{aligned}$$ The discovery adds to a" +"---\nabstract: 'We differentiate non-extremal black hole, *extremal* black hole and *naked singularity* via metric perturbations for Reissner-Nordstr\u00f6m spacetime. First we study the axial perturbations for *extremal* Reissner-Nordstr\u00f6m black hole and compute the effective potential due to these perturbations. Then we study the axial perturbations for the naked singularity case and compute the effective potential. We show that for the non-extremal black hole, *the effective potential outside the event horizon\u00a0($r_{+}$) is real and positive. While in between Cauchy horizon\u00a0($r_{-}$) and event horizon\u00a0($r_{-} 5000$d) at around 3 GHz, which mimics the properties of the central absorbed component seen in SN 1986J." +"---\nabstract: 'In this work we present a method to adaptively compensate for scale factor errors in both rotational velocity and seeker angle measurements. 
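The measurement model behind this record, sketched minimally: a rate-gyro reading scaled by an unknown factor, followed by the compensation step. The least-squares estimator below is a stand-in assumption (it presumes access to a reference signal) and is not the record's recurrent predictive-coding network.

```python
# Measurement model omega_meas = (1 + s) * omega_true and its compensation.
# The estimator here is a stand-in least-squares fit, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 1000)
omega_true = 0.2 * np.sin(t)                     # rad/s, toy trajectory
s_true = 0.03                                    # 3% scale factor error
omega_meas = (1.0 + s_true) * omega_true + 1e-4 * rng.standard_normal(t.size)

# Stand-in estimate of s, assuming a reference signal is available.
s_hat = np.dot(omega_meas, omega_true) / np.dot(omega_true, omega_true) - 1.0
omega_comp = omega_meas / (1.0 + s_hat)          # compensation step

print(f"estimated s: {s_hat:.4f}")
print("residual RMS:", np.sqrt(np.mean((omega_comp - omega_true) ** 2)))
```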
The adaptation scheme estimates the scale factor errors using a predictive coding model implemented as a deep neural network with recurrent layer, and then uses these estimates to compensate for the error. During training, the model learns over a wide range of scale factor errors that ideally bound the expected errors that can occur during deployment, allowing the deployed model to quickly adapt in real time to the ground truth error. We demonstrate in a realistic six degrees-of-freedom simulation of an exoatmospheric intercept that our method effectively compensates for concurrent rotational velocity and seeker angle scale factor errors. The compensation method is general in that it is independent of a given guidance, navigation, and control system implementation. Although demonstrated using an exoatmospheric missile with strapdown seeker, the method is also applicable to endoatmospheric missiles with both gimbaled and strapdown seekers, as well as general purpose inertial measurement unit rate gyro compensation.'\nauthor:\n- 'Brian Gaudet[^1]'\nbibliography:\n- 'references.bib'\ntitle: Adaptive Scale Factor Compensation for Missiles with Strapdown Seekers via Predictive Coding\n---\n\nIntroduction\n============\n\nfactor measurement" +"---\nabstract: 'Aiming at better representing multivariate relationships, this paper investigates a motif dimensional framework for higher-order graph learning. The graph learning effectiveness can be improved through OFFER. The proposed framework mainly aims at accelerating and improving higher-order graph learning results. We apply the acceleration procedure from the dimensional of network motifs. Specifically, the refined degree for nodes and edges are conducted in two stages: (1) employ motif degree of nodes to refine the adjacency matrix of the network; and (2) employ motif degree of edges to refine the transition probability matrix in the learning process. In order to assess the efficiency of the proposed framework, four popular network representation algorithms are modified and examined. By evaluating the performance of OFFER, both link prediction results and clustering results demonstrate that the graph representation learning algorithms enhanced with OFFER consistently outperform the original algorithms with higher efficiency.'\nauthor:\n- Shuo Yu\n- Feng Xia\n- Jin Xu\n- Zhikui Chen\n- Ivan Lee\nbibliography:\n- 'References.bib'\ntitle: 'OFFER: A Motif Dimensional Framework for Network Representation Learning'\n---\n\n<ccs2012> <concept> <concept\\_id>10002950</concept\\_id> <concept\\_desc>Mathematics of computing</concept\\_desc> <concept\\_significance>300</concept\\_significance> </concept> <concept> <concept\\_id>10010147.10010178.10010187</concept\\_id> <concept\\_desc>Computing methodologies\u00a0Knowledge representation and reasoning</concept\\_desc> <concept\\_significance>300</concept\\_significance> </concept> <concept> <concept\\_id>10010147.10010257</concept\\_id> <concept\\_desc>Computing methodologies\u00a0Machine learning</concept\\_desc>" +"---\nabstract: 'Total generalization variation (TGV) is a very powerful and important regularization for various inverse problems and computer vision tasks. In this paper, we propose a semismooth Newton based augmented Lagrangian method for solving this problem. The augmented Lagrangian method (also called as method of multipliers) is widely used for lots of smooth or nonsmooth variational problems. 
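For orientation, the generic method-of-multipliers loop on the simplest possible instance — an equality-constrained quadratic programme, where each inner minimisation reduces to a linear solve. The matrices are toy assumptions; in the TGV setting of this record the inner subproblems are nonsmooth and coupled, which is what the semismooth Newton solver is for.

```python
# Generic augmented Lagrangian (method of multipliers) for
#   min 0.5*x'Qx - c'x  subject to  Ax = b,
# with exact inner solves; toy data, not the TGV problem itself.
import numpy as np

Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

rho, lam = 10.0, np.zeros(1)
for _ in range(50):
    # x-update: grad of L_rho = 0  =>  (Q + rho*A'A) x = c - A'lam + rho*A'b
    x = np.linalg.solve(Q + rho * A.T @ A, c - A.T @ lam + rho * A.T @ b)
    lam = lam + rho * (A @ x - b)       # multiplier (dual ascent) update
print("x* =", x, "constraint residual:", A @ x - b)
```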
However, its efficiency heavily depends on solving the corresponding coupled and nonlinear system together and simultaneously. With efficient primal-dual semismooth Newton methods for the challenging and highly coupled nonlinear subproblems involving total generalized variation, we develop a highly efficient and competitive augmented Lagrangian method compared with some fast first-order method. With the analysis of the metric subregularities of the corresponding functions, we give both the global convergence and local linear convergence rate for the proposed augmented Lagrangian methods.'\nauthor:\n- 'Hongpeng Sun[^1]'\ntitle: An Efficient Augmented Lagrangian Method with Semismooth Newton Solver for Total Generalized Variation\n---\n\n#### Key words.\n\n[Augmented Lagrangian method, primal-dual semismooth Newton method, local linear convergence rate, metric subregularity]{}\n\n#### AMS subject classifications.\n\n65K10, 49J52, 49M15\n\nIntroduction\n============\n\nTotal generalized variation (TGV) is an important regularization and image prior to various applications including medical imaging, computer vision, tomography," +"---\nabstract: 'A recent spate of state-of-the-art semi- and un-supervised solutions disentangle and encode image \u201ccontent\u201d into a spatial tensor and image appearance or \u201cstyle\u201d into a vector, to achieve good performance in spatially equivariant tasks (*e.g.* image-to-image translation). To achieve this, they employ different model design, learning objective, and data biases. While considerable effort has been made to measure disentanglement in vector representations, and assess its impact on task performance, such analysis for (spatial) content - style disentanglement is lacking. In this paper, we conduct an empirical study to investigate the role of different biases in content-style disentanglement settings and unveil the relationship between the degree of disentanglement and task performance. In particular, we consider the setting where we: (i) identify key design choices and learning constraints for three popular content-style disentanglement models; (ii) relax or remove such constraints in an ablation fashion; and (iii) use two metrics to measure the degree of disentanglement and assess its effect on each task performance. Our experiments reveal that there is a \u201csweet spot\u201d between disentanglement, task performance and - surprisingly \u2013 content interpretability, suggesting that blindly forcing for higher disentanglement can hurt model performance and content factors semanticness. Our findings, as" +"---\nabstract: 'Vision is the richest and most cost-effective technology for Driver Monitoring Systems (DMS), especially after the recent success of Deep Learning (DL) methods. The lack of sufficiently large and comprehensive datasets is currently a bottleneck for the progress of DMS development, crucial for the transition of automated driving from SAE Level-2 to SAE Level-3. In this paper, we introduce the *Driver Monitoring Dataset (DMD)*, an extensive dataset which includes real and simulated driving scenarios: distraction, gaze allocation, drowsiness, hands-wheel interaction and context data, in 41 hours of RGB, depth and IR videos from 3 cameras capturing face, body and hands of 37 drivers. A comparison with existing similar datasets is included, which shows the DMD is more extensive, diverse, and multi-purpose. 
The usage of the DMD is illustrated by extracting a subset of it, the *dBehaviourMD* dataset, containing 13 distraction activities, prepared to be used in DL training processes. Furthermore, we propose a robust and real-time driver behaviour recognition system targeting a real-world application that can run on cost-efficient CPU-only platforms, based on the *dBehaviourMD*. Its performance is evaluated with different types of fusion strategies, which all reach enhanced accuracy still providing real-time response.'\nauthor:\n- '[Juan Diego]{}" +"---\nabstract: 'Information theory has become an increasingly important research field to better understand quantum mechanics. Noteworthy, it covers both foundational and applied perspectives, also offering a common technical language to study a variety of research areas. Remarkably, one of the key information-theoretic quantities is given by the relative entropy, which quantifies how difficult is to tell apart two probability distributions, or even two quantum states. Such a quantity rests at the core of fields like metrology, quantum thermodynamics, quantum communication and quantum information. Given this broadness of applications, it is desirable to understand how this quantity changes under a quantum process. By considering a general unitary channel, we establish a bound on the generalized relative entropies (R\u00e9nyi and Tsallis) between the output and the input of the channel. As an application of our bounds, we derive a family of quantum speed limits based on relative entropies. Possible connections between this family with thermodynamics, quantum coherence, asymmetry and single-shot information theory are briefly discussed.'\nauthor:\n- Diego Paiva Pires\n- Kavan Modi\n- Lucas Chibebe C\u00e9leri\ntitle: 'Bounding generalized relative entropies: Nonasymptotic quantum speed limits'\n---\n\nIntroduction {#sec:introd000_xxx_0001}\n============\n\nSince its formulation decades ago by Shannon\u00a0[@Shannon1948], information theory has" +"---\nabstract: 'The scalar induced gravitational waves (SIGWs) is a useful tool to probe the physics in the early universe. To study inflationary models with this tool, we need to know how the waveform of SIGWs is related to the shape of the scalar power spectrum. We propose two parameterizations to approximate the scalar power spectrum with either a sharp or a broad spike at small scales, and then use these two parameterizations to study the relation between the shapes of $\\Omega_{GW}$ and the scalar power spectrum. We find that the waveform of SIGWs has a similar shape to the power spectrum. Away from the peak of the spike, the frequency relation $\\Omega_{GW}(k)\\sim \\mathcal{P}_\\zeta^2(k)$ holds independent of the functional form of the scalar power spectrum. We also give a physical explanation for this general relationship. 
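The quoted relation, restated with its immediate corollary for spectral slopes (the normalisation $C$ absorbs the model-dependent kernel integral):

```latex
\Omega_{\rm GW}(k) \sim C\,\mathcal{P}_\zeta^2(k)
\quad\Longrightarrow\quad
\frac{d\ln \Omega_{\rm GW}}{d\ln k} \simeq 2\,\frac{d\ln \mathcal{P}_\zeta}{d\ln k},
\qquad k \text{ away from the peak}.
```

The doubling of the logarithmic slope is just the square in the relation; it gives a quick consistency check when reading a power-spectrum shape off an SIGW waveform.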
The general relation is useful for determining the scalar power spectrum and probing inflationary physics with the waveform of SIGWs.'\nauthor:\n- Fengge Zhang\n- Arshad Ali\n- Yungui Gong\n- Jiong Lin\n- Yizhou Lu\ntitle: On the waveform of the scalar induced gravitational waves\n---\n\nIntroduction\n============\n\nAfter the detection of gravitational waves (GWs) by the Laser Interferometer Gravitational-Wave Observatory (LIGO) scientific collaboration" +"---\nabstract: 'We argue that dark energy with multiple fields is theoretically well-motivated and predicts distinct observational signatures, in particular when cosmic acceleration takes place along a trajectory that is highly non-geodesic in field space. Such models provide novel physics compared to $\\Lambda$CDM and quintessence by allowing cosmic acceleration on steep potentials. From the theoretical point of view, these theories can easily satisfy the conjectured swampland constraints and may in certain cases be technically natural, potential problems which are endemic to standard single-field dark energy. Observationally, we argue that while such multi-field models are likely to be largely indistinguishable from the concordance cosmology at the background level, dark energy perturbations can cluster, leading to an enhanced growth of large-scale structure that may be testable as early as the next generation of cosmological surveys.'\nauthor:\n- Yashar Akrami\n- Misao Sasaki\n- 'Adam R. Solomon'\n- Valeri Vardanyan\nbibliography:\n- 'refs.bib'\ntitle: 'Multi-field dark energy: cosmic acceleration on a steep potential'\n---\n\n[**[[Introduction.]{}]{}**]{}\u2014\n\nDark energy beyond the cosmological standard model is usually studied in the context of theories with a single scalar field, such as quintessence [@Copeland:2006wr] or scalar-tensor gravity [@Clifton:2011jh]. While this is primarily motivated by simplicity, physically-realistic models often" +"---\nabstract: 'In this work, an efficient approximation scheme has been proposed for getting accurate approximate solution of nonlinear partial differential equations with constant or variable coefficients satisfying initial conditions in a series of exponential instead of an algebraic function of independent variables. As a consequence: i) the convergence of the series found to be faster than the same obtained by few other methods and ii) the exact analytic solution can be obtained from the first few terms of the series of the approximate solution, in cases the equation is integrable. The convergence of the sum of the successive correction terms has been established and an estimate of the error in the approximation has also been presented. The efficiency of the present method has been illustrated through some examples with a variety of nonlinear terms present in the equation.'\naddress: |\n $^1$Department of Mathematics, Trivenidevi Bhalotia College, Raniganj-713 347, Burdwan, West Bengal, India\\\n $^2$Department of Mathematics, Visva-Bharati, Santiniketan - 731 235, West Bengal, India \nauthor:\n- Prakash Kumar Das$^1$\n- 'M.M. Panja$^2$'\nbibliography:\n- 'wavelike.bib'\ntitle: 'A rapidly convergent approximation scheme for nonlinear autonomous and non-autonomous wave-like equations '\n---\n\nAutonomous and non-autonomous wave-like equations ,Rapidly convergent approximation scheme ,Accurate" +"---\nabstract: |\n In this thesis we look into programming by example (PBE), which is about finding a program mapping given inputs to given outputs. 
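The simplest possible baseline for the task as just defined — brute-force enumeration over a toy DSL until a program reproduces every input–output pair. The three primitives and the composition format are assumptions for illustration; neural and type-directed synthesizers such as the one developed in this thesis aim to prune exactly this search.

```python
# Brute-force PBE baseline: enumerate compositions of a tiny DSL until one
# fits all input-output examples. The DSL itself is an invented toy.
from itertools import product

PRIMS = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: 2 * x,
    "neg": lambda x: -x,
}

def synthesize(examples, max_len=3):
    """Return the shortest composition of primitives consistent with examples."""
    for length in range(1, max_len + 1):
        for names in product(PRIMS, repeat=length):
            def run(x, names=names):
                for name in names:   # apply left-to-right
                    x = PRIMS[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return " . ".join(reversed(names))   # written as a composition
    return None

# Target behaviour f(x) = 2x + 2, i.e. doubling after incrementing.
print(synthesize([(0, 2), (1, 4), (5, 12)]))  # -> "dbl . inc"
```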
PBE has traditionally seen a split between formal versus neural approaches, where formal approaches typically involve deductive techniques such as SAT solvers and types, while the neural approaches involve training on sample input-outputs with their corresponding program, typically using sequence-based machine learning techniques such as LSTMs\u00a0[@lstm]. As a result of this split, programming types had yet to be used in neural program synthesis techniques.\n\n We propose a way to incorporate programming types into a neural program synthesis approach for PBE. We introduce the Typed Neuro-Symbolic Program Synthesis (TNSPS) method based on this idea, and test it in the functional programming context to empirically verify type information may help improve generalization in neural synthesizers on limited-size datasets.\n\n Our TNSPS model builds upon the existing Neuro-Symbolic Program Synthesis (NSPS)\u00a0[@nsps], a tree-based neural synthesizer combining info from input-output examples plus the current program, by further exposing information on types of those input-output examples, of the grammar production rules, as well as of the hole that we wish to expand in the program.\n\n We further explain how" +"---\nbibliography:\n- 'reference.bib'\n---\n\nIntroduction\n============\n\nThis paper studies the sparse principal component analysis (SPCA) problem of the form $$\\begin{aligned}\n\\label{spca}\n \\text{\\rm (SPCA)} \\quad w^{*} := \\max_{\\bm{x} \\in \\Re^n} \\left \\{\\bm{x}^{\\top}\\bm{A}\\bm{x}: {||\\bm{x}||_{2}=1}, {||\\bm{x}||_0 = k} \\right \\},\n \\end{aligned}$$ where the symmetric positive semi-definite matrix $\\bm{A} \\in \\Re^{n\\times n}$ denotes the sample covariance out of a dataset with $n$ features and the integer $k \\in [n]$ denotes the sparsity of its first principal component (PC). In SPCA , the objective is to select the best size-$k$ principal submatrix from a covariance matrix $\\bm{A}$ with the maximum largest eigenvalue. Compared to the conventional PCA, the extra zero-norm constraint $||\\bm{x}||_0 = k$ in SPCA restricts the number of features of the first PC $\\bm x$ to be $k$ most important ones. In this way, SPCA improves the interpretability of the obtained PC, which has been shown as early as @jeffers1967two in 1967. It is also recognized that SPCA can be more reliable for large-scale datasets than PCA, where the number of features is far more than that of observations [@zhang2011large]. These advantages of SPCA have benefited many application fields such as biology, finance, cloud computing, and healthcare, which frequently deal with datasets" +"---\nabstract: 'Given a Wilson action invariant under global chiral transformations, we can construct current composite operators in terms of the Wilson action. The short distance singularities in the multiple products of the current operators are taken care of by the exact renormalization group. The Ward-Takahashi identity is compatible with the finite momentum cutoff of the Wilson action. The exact renormalization group and the Ward-Takahashi identity together determine the products. 
As a concrete example, we study the Gaussian fixed-point Wilson action of the chiral fermions to construct the products of current operators.'\nauthor:\n- 'H.\u00a0Sonoda'\nbibliography:\n- 'paper.bib'\ntitle: |\n Products of Current Operators\\\n in the Exact Renormalization Group Formalism\n---\n\nIntroduction\\[sec-introduction\\]\n================================\n\nIt is a principle of quantum field theory that the invariance of a theory under a continuous transformation implies the conservation of a current. When a theory is expressed by a Wilson action with a finite momentum cutoff, the principle holds for the Wilson action. In [@Sonoda:2015pva] an energy-momentum tensor was constructed from the invariance of the Wilson action under translations and rotations. In this paper we would like to consider the Wilson action of chiral fermions with global flavor symmetry to construct multiple products of" +"---\nabstract: 'A gradient discretisation method (GDM) is an abstract setting that designs the unified convergence analysis of several numerical methods for partial differential equations and their corresponding models. In this paper, we study the GDM for anisotropic reaction diffusion problems, based on a general reaction term, with Neumann and Dirichlet boundary conditions. With natural regularity assumptions on the exact solution, the framework enables us to provide proof of the existence of weak solutions for the problem, and to obtain a uniform\u2013in\u2013time convergence for the discrete solution and a strong convergence for its discrete gradient. It also allows us to apply non conforming numerical schemes to the model on a generic grid; (the Crouzeix\u2013Raviart scheme and the hybrid mixed mimetic (HMM) methods). Numerical experiments using the HMM method are performed to study the growth of glioma tumours in heterogeneous brain environment. The dynamics of their highly diffusive nature is also measured using the fraction anisotropic measure. The validity of the HMM is examined further using four different mesh types. The results indicate that the dynamics of the brain tumour is still captured by the HMM scheme, even in the event of a highly heterogeneous anisotropic case performed on the mesh" +"---\nabstract: 'The concept of the *Internet of Things* (IoT) first appeared a few decades ago. Today, by the ubiquitous wireless connectivity, the boost of machine learning and artificial intelligence, and the advances in big data analytics, it is safe to say that IoT has evolved to a new concept called the *Internet of Everything* (IoE) or the *Internet of All*. IoE has four pillars: Things, human, data, and processes, which render it as an inhomogeneous large-scale network. A crucial challenge of such a network is to develop management, analysis, and optimization policies that besides utility-maximizer machines, also take irrational humans into account. We discuss several networking applications in which appropriate modeling of human decision-making is vital. We then provide a brief review of computational models of human decision-making. 
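One such model — and one that appears in this record's keyword list — is Kahneman–Tversky prospect theory. A minimal sketch of its value function follows, using the classic 1992 parameter estimates ($\alpha=\beta=0.88$, $\lambda=2.25$); fixing these constants is an assumption of the sketch.

```python
# Prospect-theoretic value function (Kahneman & Tversky), with the classic
# 1992 parameter estimates; concave for gains, convex and loss-averse for losses.
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x**alpha if x >= 0 else -lam * ((-x) ** beta)

# A prospect-theoretic agent weighs a 10-unit loss far more than a 10-unit gain:
print(pt_value(10.0))    # ~ 7.59
print(pt_value(-10.0))   # ~ -17.07
```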
Based on one such model, we develop a solution for a task offloading problem in fog computing and we analyze the implications of including humans in the loop.'\nauthor:\n- '[^1]'\nbibliography:\n- 'Main.bib'\ntitle: 'Computational Models of Human Decision-Making With Application to the Internet of Everything'\n---\n\n[*Keywords*]{}: Cognitive hierarchy, Decision-making, Human agent, IoE, Prospect theory, Social preference\n\nIntroduction {#sec:Intro}\n============\n\nIntegrating the human element in digital technology is a" +"---\nabstract: 'The residual torsion-free nilpotence of the commutator subgroup of a knot group has played a key role in studying the bi-orderability of knot groups [@PerRolf03; @LRR08; @CDN16; @John20a]. A technique developed by Mayland [@May75] provides a sufficient condition for the commutator subgroup of a knot group to be residually torsion-free nilpotent using work of Baumslag [@Baum67; @Baum69]. In this paper, we apply Mayland\u2019s technique to several genus one pretzel knots and a family of pretzel knots with arbitrarily high genus. As a result, we obtain a large number of new examples of knots with bi-orderable knot groups. These are the first examples of bi-orderable knot groups for knots which are not fibered or alternating.'\naddress: 'Department of Mathematics, University of Texas at Austin, Austin, TX'\nauthor:\n- Jonathan Johnson\nbibliography:\n- 'jcj\\_bib.bib'\ntitle: 'Residual Torsion-Free Nilpotence, Bi-Orderability and Pretzel Knots'\n---\n\nIntroduction\n============\n\nLet $J$ be a knot in $S^3$. The *knot exterior* of $J$ is $M_J:=S^3-\\nu(J)$ where $\\nu(J)$ is the interior of a tubular neighborhood of $J$, and the *knot group* of $J$ is $\\pi_1(M_J)$. Denote the Alexander polynomial of $J$ by $\\Delta_J$.\n\nA group $\\Gamma$ is *nilpotent* if its lower central series terminates (is trivial) after finitely" +"---\nabstract: 'The attitude of a rigid body evolves on the three-dimensional special orthogonal group, and it is often estimated by measuring reference directions, such as gravity or magnetic field, using an onboard sensor. As a single direction measurement provides a two-dimensional constraint, it has been widely accepted that at least two non-parallel reference directions should be measured, or the reference direction should change over time, to determine the attitude completely. This paper uncovers an intriguing fact that the attitude can actually be estimated by using multiple measurements of a single, fixed reference direction, provided that the angular velocity and the direction measurements are resolved in appropriate frames, respectively. More specifically, after recognizing that the attitude uncertainties propagated over the left-trivialized stochastic kinematics are distinct from those over the right-trivialized one, stochastic attitude observability with single direction measurements is formulated by an information theoretic analysis. These results are further illustrated by numerical simulations and experiments.'\nauthor:\n- 'Weixin Wang, Kanishke Gamagedara, and Taeyoung Lee[^1][^2]'\ntitle: On the Observability of Attitude with Single Direction Measurements\n---\n\nIntroduction\n============\n\nThe attitude of a rigid body is the orientation of its body-fixed frame relative to another reference frame.
It is defined by the coordinates" +"---\nabstract: |\n In $k$-ported message-passing systems, a processor can simultaneously receive $k$ different messages from $k$ other processors, and send $k$ different messages to $k$ other processors that may or may not be different from the processors from which messages are received. Modern clustered systems may not have such capabilities. Instead, compute nodes consisting of $n$ processors can simultaneously send and receive $k$ messages from other nodes, by letting $k$ processors on the nodes concurrently send and receive at most one message. We pose the question of how to design good algorithms for this $k$-lane model, possibly by adapting algorithms devised for the traditional $k$-ported model.\n\n We discuss and compare a number of (non-optimal) $k$-lane algorithms for the broadcast, scatter and alltoall collective operations (as found in, [e.g.]{}, MPI), and experimentally evaluate these on a small $36\\times 32$-node cluster with a dual OmniPath network (corresponding to $k=2$). Results are preliminary.\nauthor:\n- |\n Jesper Larsson Tr\u00e4ff\\\n TU Wien, Faculty of Informatics, Institute of Computer Engineering 191-4\\\n Favoritenstrasse 16/3rd floor, 1040 Vienna, Austria\nbibliography:\n- 'traff.bib'\n- 'parallel.bib'\ntitle: '$k$-ported [vs.]{} $k$-lane Broadcast, Scatter, and Alltoall Algorithms'\n---\n\nIntroduction\n============\n\nWe pose the problem of designing good collective communication algorithms for" +"---\nabstract: 'We investigate the relationship between environment and the galaxy main sequence (the relationship between stellar mass and star formation rate) and also the relationship between environment and radio luminosity (P$_{\\rm 1.4GHz}$) to shed new light on the effects of the environment on galaxies. We use the VLA-COSMOS 3 GHz catalogue that consists of star-forming galaxies (SFGs) and quiescent galaxies (AGN) in three different environments (field, filament, cluster) and for three different galaxy types (satellite, central, isolated). We perform for the first time a comparative analysis of the distribution of SFGs with respect to the main sequence (MS) consensus region from the literature, taking into account galaxy environment and using radio observations at 0.1 $\\leq$ z $\\leq$ 1.2. Our results corroborate that SFR is declining with cosmic time, which is consistent with the literature. We find that the slope of the MS for different $z$ and M$_{*}$ bins is shallower than the MS consensus, with a gradual evolution towards higher redshift bins, irrespective of environment. We see no SFR trends with either environment or galaxy type, given the large errors. In addition, we note that the environment does not seem to be the cause of the flattening of the MS" +"---\nabstract: 'We study the origin of homoclinic chaos in the classical 3D model proposed by O.\u00a0R\u00f6ssler in 1976. Of particular interest to us are the convoluted bifurcations of the Shilnikov saddle-foci and how their synergy determines the global unfolding of the model, along with transformations of its chaotic attractors.
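For readers who want to reproduce the basic dynamics behind the Rössler discussion above, the following is a minimal integration sketch of the classical Rössler system; the parameter values are the textbook chaotic set (a, b, c) = (0.2, 0.2, 5.7), not necessarily those scrutinized in that paper.

```python
# Minimal sketch: integrate the classical Roessler system
#   x' = -y - z,  y' = x + a*y,  z' = b + z*(x - c).
# The parameter values are the textbook chaotic set, not necessarily
# the values studied in the paper above.
import numpy as np
from scipy.integrate import solve_ivp

def roessler(t, u, a=0.2, b=0.2, c=5.7):
    x, y, z = u
    return [-y - z, x + a * y, b + z * (x - c)]

sol = solve_ivp(roessler, (0.0, 500.0), [1.0, 1.0, 0.0],
                dense_output=True, rtol=1e-9, atol=1e-12)
t = np.linspace(100.0, 500.0, 40000)   # discard the transient before t = 100
x, y, z = sol.sol(t)
print(x.min(), x.max())                # rough extent of the attractor
```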
We apply two proposed computational methods, 1D return maps and a symbolic approach specifically tailored to this model, to scrutinize homoclinic bifurcations, as well as to detect the regions of structurally stable and chaotic dynamics in the parameter space of the R\u00f6ssler model.'\nauthor:\n- Semyon Malykh\n- Yuliya Bakhanova\n- Alexey Kazakov\n- Krishna Pusuluri\n- Andrey Shilnikov\ntitle: Homoclinic chaos in the R\u00f6ssler model\n---\n\n**This paper is dedicated to Otto R\u00f6ssler on the occasion of his 80th anniversary. He, being one of the pioneers in the chaosland, proposed a number of simple models with chaotic\u00a0[@Rossler1976; @Rossler1979] and hyper-chaotic\u00a0[@Rossler1979hyp] dynamics that became classics in the field of applied dynamical systems. The goal of our paper is to examine and articulate the pivotal role and interplay of two Shilnikov saddle-foci [@Shilnikov1965] in the famous 3D R\u00f6ssler model as they shape the topology of the chaotic attractors such" +"---\nabstract: 'The aim of the present work is to derive error estimates for the Laplace eigenvalue problem in mixed form, by means of a virtual element method. With the aid of the theory for non-compact operators, we prove that the proposed method is spurious-free and convergent. We prove optimal order error estimates for the eigenvalues and eigenfunctions. Finally, we report numerical tests to confirm the theoretical results together with a rigorous computational analysis of the effects of the stabilization in the computation of the spectrum.'\naddress:\n- 'Departamento de Matem\u00e1tica, Universidad del B\u00edo-B\u00edo, Casilla 5-C, Concepci\u00f3n, Chile.'\n- 'Departamento de Ciencias Exactas, Universidad de Los Lagos, Casilla 933, Osorno, Chile.'\nauthor:\n- Felipe Lepe\n- Gonzalo Rivera\ntitle: 'A priori error analysis for a mixed VEM discretization of the spectral problem for the Laplacian operator.'\n---\n\nMixed virtual element method, Laplace eigenvalue problem, error estimates. 35P15, 35Q35, 65N15, 65N30, 76B15.\n\nIntroduction {#SEC:INTR}\n============\n\nIn recent years, the virtual element method (VEM), which is a generalization of the classic finite element method to polygonal meshes, has shown important breakthroughs in the numerical resolution of partial differential equations. The eigenvalue problems are a subject of study where the classic numerical
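The "simple spectral certificate" mentioned in the refutation abstract above can be illustrated, in generic form, by a Hoffman-type eigenvalue bound on the chromatic number; this sketch is a stand-in under that assumption, not necessarily the exact certificate the authors analyze.

```python
# Hoffman-type spectral lower bound on the chromatic number,
#   chi(G) >= 1 + lambda_max / |lambda_min|,
# computed here for a random d-regular graph. A generic illustration of a
# spectral certificate; not necessarily the exact one analyzed in the paper.
import networkx as nx
import numpy as np

G = nx.random_regular_graph(d=6, n=200, seed=1)
A = nx.to_numpy_array(G)
eig = np.linalg.eigvalsh(A)
lam_max, lam_min = eig[-1], eig[0]   # lam_min < 0 for any graph with an edge
bound = 1.0 + lam_max / abs(lam_min)
# Certifies that G is not k-colorable for any k < bound.
print(f"chi(G) >= {bound:.2f}")
```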
This avoids the pushout effect of random matrix theory, and delays the point at which the planting becomes visible in the spectrum or local statistics. To illustrate this further, we give similar results for a Gaussian analogue of this problem: a quiet version of the spiked model, where we plant an eigenspace rather than adding" +"---\nabstract: 'We investigate the behavior of the Dirac spinor fields in general relativistic high density stellar backgrounds and the possibility of spontaneous spinorization which is analogous to spontaneous scalarization. We consider the model with the modified kinetic term of the Dirac field by the insertion of the fifth gamma matrix ${\\hat \\gamma}^5$ and the conformal coupling of the Dirac spinor field to the matter sector, which would lead to the tachyonic growth of the Dirac spinor field in the high density compact stellar backgrounds. In order to obtain the static and spherically symmetric solutions, we have to consider the two Dirac fields at the same time. We show that in the constant density stellar backgrounds our model gives rise to the nontrivial solutions of the Dirac spinor fields with any number of nodes, where one mode has one more node than the other. We also show that at the leading order all the components of the effective energy-momentum tensor of the Dirac spinor fields, after the summation over the two fields, vanish for any separable time-dependent ansatz of the Dirac spinor fields in any static and spherically symmetric spacetime backgrounds, indicating that spontaneous spinorization takes place as a stealth" +"---\nabstract: '[*We consider a Canonical Polyadic (CP) decomposition approach to low-rank tensor completion (LRTC) by incorporating external pairwise similarity relations through graph Laplacian regularization on the CP factor matrices. [The usage of graph regularization entails benefits in the learning accuracy of LRTC, but at the same time, induces coupling graph Laplacian terms that hinder the optimization of the tensor completion model.]{} [In order to solve graph-regularized LRTC, we propose efficient alternating minimization algorithms by leveraging the block structure of the underlying CP decomposition-based model. For the subproblems of alternating minimization, a linear conjugate gradient subroutine is specifically adapted to graph-regularized LRTC. Alternatively, we circumvent the complicating coupling effects of graph Laplacian terms by using an alternating directions method of multipliers.]{} Based on the Kurdyka-[\u0141]{}ojasiewicz property, we show that the sequence generated by the proposed algorithms globally converges to a critical point of the objective function. Moreover, the complexity and convergence rate are also derived. In addition, numerical experiments including synthetic data and real data [show that the graph regularized tensor completion model has improved recovery results compared to those without graph regularization, and that the proposed algorithms achieve gains in time efficiency over existing algorithms.]{}* ]{}'\nauthor:\n- 'Yu" +"---\nabstract: 'Quantum codes typically rely on large numbers of degrees of freedom to achieve low error rates. However each additional degree of freedom introduces a new set of error mechanisms. Hence minimizing the degrees of freedom that a quantum code utilizes is helpful. One quantum error correction solution is to encode quantum information into one or more bosonic modes. 
We revisit rotation-invariant bosonic codes, which are supported on Fock states that are gapped an integer $g$ apart, and the gap $g$ imparts number-shift resilience to these codes. Intuitively, since phase operators and number shift operators do not commute, one expects a trade-off between resilience to number-shift and rotation errors. Here, we obtain results pertaining to the non-existence of approximate quantum error correcting $g$-gapped single-mode bosonic codes with respect to Gaussian dephasing errors. We show that by using arbitrarily many modes, $g$-gapped multi-mode codes can yield good approximate quantum error correction codes for any finite magnitude of Gaussian dephasing and amplitude damping errors.'\nauthor:\n- |\n \\\n \\\n \\\nbibliography:\n- 'bosonic-dephasing.bib'\ntitle: 'Trade-offs on number and phase shift resilience in bosonic quantum codes'\n---\n\nIntroduction\n============\n\nTraditionally, quantum error correction is studied on physical systems comprising" +"---\nabstract: 'In the university timetabling problem, sometimes additions or cancellations of course sections occur shortly before the beginning of the academic term, necessitating last-minute teaching staffing changes. We present a decision-making framework that both minimizes the number of course swaps, which are inconvenient to faculty members, and maximizes faculty members\u2019 preferences for times they wish to teach. The model is formulated as an integer linear program (ILP). Numerical simulations for a hypothetical mid-sized academic department are presented.'\nauthor:\n- 'Jakob Kotas[^1]'\n- 'Peter Pham[^2]'\n- Sam Koellmann\nbibliography:\n- 'lmrp.bib'\ntitle: |\n Optimal minimal-perturbation university\\\n timetabling with faculty preferences\n---\n\nKeywords: scheduling, university timetabling, integer program, minimal perturbation\n\nIntroduction {#sec:intro}\n============\n\nScheduling and assignment problems have long been a focus of study in the operations research community. In particular, the problem of scheduling university courses has been investigated by a number of authors dating back at least to the 1970s [@breslaw; @shih]. A number of models for university course scheduling, also known as timetabling, have been proposed. The overall problem contains subproblems including (a) determining time slots that courses may be offered during, (b) deciding which course is assigned to a particular room at a given time, (c) deciding which" +"---\nabstract: 'Contrastive self-supervised learning (CSL) is an approach to learn useful representations by solving a pretext task which selects and compares anchor, negative and positive (APN) features from an unlabeled dataset. We present a conceptual framework which characterizes CSL approaches in five aspects: (1) data augmentation pipeline, (2) encoder selection, (3) representation extraction, (4) similarity measure, and (5) loss function. We analyze three leading CSL approaches (AMDIM, CPC, and SimCLR) and show that despite different motivations, they are special cases under this framework. We show the utility of our framework by designing **Y**et **A**nother DIM (**YADIM**) which achieves competitive results on CIFAR-10, STL-10 and ImageNet, and is more robust to the choice of encoder and the representation extraction strategy.
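As a concrete anchor for the CSL framework described above (aspects (4) and (5): similarity measure and loss function), here is a condensed sketch of a SimCLR-style NT-Xent contrastive loss; it is a generic illustration, not the released implementation referred to below.

```python
# Minimal sketch of a SimCLR-style NT-Xent (contrastive) loss for one batch
# of paired augmentations. A condensed generic version for illustration;
# see the authors' released code for their exact implementations.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5):
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2N x D, unit norm
    sim = z @ z.t() / tau                                # cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))                # drop self-similarity
    # Positives: i-th view in z1 pairs with i-th view in z2 and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)      # dummy embeddings
print(nt_xent(z1, z2).item())
```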
To support ongoing CSL research, we release the PyTorch implementation of this conceptual framework along with standardized implementations of AMDIM, CPC (V2), SimCLR, BYOL, Moco (V2) and YADIM.'\nauthor:\n- |\n William Falcon\\\n New York University, NY\\\n Lightning Labs, NY\\\n `waf251@nyu.edu`\\\n Kyunghyun Cho\\\n New York University, NY\\\n `kc119@nyu.edu`\\\nbibliography:\n- 'neurips\\_2020.bib'\ntitle: 'A Framework For Contrastive Self-Supervised Learning And Designing A New Approach'\n---\n\nIntroduction\n============\n\nA goal of self-supervised learning is to learn to extract representations of an input using" +"---\nauthor:\n- Ting He\nbibliography:\n- 'cas-refs.bib'\ndate: 'ting.he@cueb.edu.cn'\ntitle: '**Nonparametric Predictive Inference for Asian options**'\n---\n\n**ABSTRACT**\n\nThe Asian option, as one of the path-dependent exotic options, is widely traded in the energy market, either for speculation or hedging. However, it is hard to price, especially the one with an arithmetic average price. The traditional trading procedure is either too restrictive, by assuming the distribution of the underlying asset, or insufficiently rigorous, by relying on approximations. It is attractive to infer the Asian option price with few assumptions on the underlying asset distribution and to adapt to the historical data with a nonparametric method. In this paper, we present a novel approach to price the Asian option from an imprecise statistical aspect. Nonparametric Predictive Inference (NPI) is applied to infer the average value of the future underlying asset price, which attempts to make predictions that reflect more uncertainty because of the limited information. A rational pairwise trading criterion is also proposed in this paper for comparing Asian options, as a risk measure. The NPI method for the Asian option is illustrated in several examples using simulation techniques or empirical data from the energy market.\n\nKey words:" +"---\nabstract: 'We introduce and systematically study a profile function whose asymptotic behavior quantifies the dimension or the size of a metric approximation of a finitely generated group $G$ by a family of groups $\\mathcal{F}=\\{(G_{\\alpha},d_{\\alpha}, k_{\\alpha}, \\varepsilon _{\\alpha })\\}_{\\alpha\\in I}$, where each group $G_\\alpha$ is equipped with a bi-invariant metric $d_{\\alpha}$ and a dimension $k_{\\alpha}$, for strictly positive real numbers $\\varepsilon _{\\alpha }$ such that $\\inf_{\\alpha }\\varepsilon _{\\alpha }>0$. Through the notion of a residually amenable profile that we introduce, our approach generalizes classical isoperimetric (aka F\u00f8lner) profiles of amenable groups and recently introduced functions quantifying residually finite groups. Our viewpoint is much more general and covers hyperlinear and sofic approximations as well as many other metric approximations such as weakly sofic, weakly hyperlinear, and linear sofic approximations.'\naddress:\n- |\n Universit\u00e4t Wien, Fakult\u00e4t f\u00fcr Mathematik\\\n Oskar-Morgenstern-Platz 1, 1090 Wien, Austria.\n- 'Universit\u00e9 de Gen\u00e8ve, Section de Math\u00e9matiques, 2-4 rue du Li\u00e8vre, Case postale 64, 1211 Gen\u00e8ve 4, Switzerland. '\nauthor:\n- Goulnara Arzhantseva\n- 'Pierre-Alain Cherix'\nbibliography:\n- 'quantify.bib'\ntitle: Quantifying metric approximations of discrete groups\n---\n\n[^1]\n\nIntroduction\n============\n\nApproximation is ubiquitous in mathematics.
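For contrast with the NPI method in the Asian-option abstract above, a standard parametric baseline prices the arithmetic-average Asian call by Monte Carlo under geometric Brownian motion; the model and all parameter values below are illustrative assumptions, exactly the kind of distributional restriction the nonparametric approach avoids.

```python
# Baseline Monte Carlo pricer for an arithmetic-average Asian call under
# geometric Brownian motion. Purely illustrative: this is the kind of
# distributional assumption the NPI approach above avoids. Parameters ours.
import numpy as np

def asian_call_mc(s0=100.0, strike=100.0, r=0.02, sigma=0.3,
                  maturity=1.0, n_steps=50, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    s = s0 * np.exp(log_paths)                          # simulated price paths
    payoff = np.maximum(s.mean(axis=1) - strike, 0.0)   # arithmetic average
    return np.exp(-r * maturity) * payoff.mean()

print(asian_call_mc())
```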
In the theory of groups, it is particularly natural to approximate infinite groups" +"---\nabstract: 'We report the discovery of a planetary system orbiting [TOI-763]{} (aka CD-39 7945), a $V=10.2$, high proper motion G-type dwarf star that was photometrically monitored by the [*TESS*]{}\u00a0space mission in Sector 10. We obtain and model the stellar spectrum and find an object slightly smaller than the Sun, and somewhat older, but with a similar metallicity. Two planet candidates were found in the light curve to be transiting the star. Combining [*TESS*]{}\u00a0transit photometry with HARPS high-precision radial velocity follow-up measurements confirms the planetary nature of these transit signals. We determine masses, radii, and bulk densities of these two planets. A third planet candidate was discovered serendipitously in the radial velocity data. The inner transiting planet, [TOI-763]{}b, has an orbital period of $P_\\mathrm{b}$=5.6\u00a0days, a mass of $M_\\mathrm{b}$=$9.8\\pm0.8$$M_\\oplus$, and a radius of $R_\\mathrm{b}$=$2.37\\pm0.10$$R_\\oplus$. The second transiting planet, [TOI-763]{}c, has an orbital period of $P_\\mathrm{c}$=12.3\u00a0days, a mass of $M_\\mathrm{c}$=$9.3\\pm1.0$$M_\\oplus$, and a radius of $R_\\mathrm{c}$=$2.87\\pm0.11$$R_\\oplus$. We find the outermost planet candidate to orbit the star with a period of $\\sim$48\u00a0days. If confirmed as a planet, it would have a minimum mass of $M_\\mathrm{d}$=$9.5\\pm1.6$$M_\\oplus$. We investigated the [*TESS*]{}\u00a0light curve in order to search for a mono transit by planet" +"---\nabstract: 'The response of cell populations to external stimuli plays a central role in biological phenomena such as epithelial wound healing and developmental morphogenesis. Wave propagation of a protein signal has been shown to direct collective migration in one direction, but this mechanism based on active matter under a traveling wave is not fully understood. To elucidate how the traveling wave of the protein signal directs collective migration, we study the mechanics of the epithelial cell monolayer, taking into account the signal-dependent coordination of contractile stress and cellular orientation. By constructing an optogenetically controlled experimental cell model, we found that local signal activation induces changes in cell density and orientation with the direction of propagation, increasing the net motion relative to the wave. In addition, the migration exhibits an optimal speed as a function of the propagation speed of the signal. This occurs because the mechanical deformation is not fully relaxed for rapid signal activation, resulting in an optimal signal propagation speed. The presented mechanical model can also be extended to wound healing, providing a versatile model for understanding the interplay between mechanics and signaling in active matter.'\nauthor:\n- Tatsuya Fukuyama\n- Hiroyuki Ebata\n- Akihisa Yamamoto\n- Ryo Ienaga\n-" +"---\nabstract: 'Bandit and reinforcement learning (RL) problems can often be framed as optimization problems where the goal is to maximize average performance while having access only to stochastic estimates of the true gradient. Traditionally, stochastic optimization theory predicts that learning dynamics are governed by the curvature of the loss function and the noise of the gradient estimates. In this paper we demonstrate that this is not the case for bandit and RL problems.
To allow our analysis to be interpreted in light of multi-step MDPs, we focus on techniques derived from stochastic optimization principles\u00a0(e.g., natural policy gradient and EXP3) and we show that some standard assumptions from optimization theory are violated in these problems. We present theoretical results showing that, at least for bandit problems, curvature and noise are not sufficient to explain the learning dynamics and that seemingly innocuous choices like the baseline can determine whether an algorithm converges. These theoretical findings match our empirical evaluation, which we extend to multi-state MDPs.'\nauthor:\n- 'Wesley Chung[^1]'\n- Valentin Thomas\n- 'Marlos C. Machado'\n- Nicolas Le Roux\nbibliography:\n- 'full.bib'\ntitle: |\n Beyond variance reduction: Understanding\\\n the true impact of baselines on policy optimization\n---\n\nIntroduction\n============" +"---\nabstract: 'Motivated by the recent discovery of superconductivity in infinite-layer nickelates RE$_{1-\\delta}$Sr$_\\delta$NiO$_2$ (RE$=$Nd, Pr), we study the role of Hund\u2019s coupling $J$ in a quarter-filled two-orbital Hubbard model, which has been on the periphery of attention. A region of negative effective Coulomb interaction of this model is revealed to be differentiated from three- and five-orbital models in their typical Hund\u2019s metal active fillings. We identify distinctive regimes including four different correlated metals, one of which stems from the proximity to a Mott insulator while the other three, which we call \u201cintermediate\" metal, weak Hund\u2019s metal, and valence-skipping metal, from the effect of $J$ being away from Mottness. Defining criteria characterizing these metals are suggested, establishing the existence of Hund\u2019s metallicity in two-orbital systems.'\nauthor:\n- Siheon Ryee\n- Myung Joon Han\n- Sangkook Choi\nbibliography:\n- 'ref.bib'\ntitle: 'Hund physics landscape of two-orbital system'\n---\n\nA novel route to the electron correlation, which has attracted a great deal of attention over the last fifteen years, is on-site Hund\u2019s coupling $J$ [@Georges]. This energy scale favors high-spin configurations on each atom, lifting the degeneracy of atomic multiplets. In multiorbital systems away from half-filling, an intriguing correlated metallic regime dubbed" +"---\nauthor:\n- 'Pedro Pessoa$^1$, Bruno Arderucio Costa$^2$\\\n $^1$Department of Physics, University at Albany - SUNY\\\n Albany, NY, USA\\\n $^2$ Department of Physics and Astronomy, University of British Columbia\\\n Vancouver, BC, Canada\ntitle: 'Comment on \u201cBlack Hole Entropy: A Closer Look\u201d'\n---\n\nIntroduction\n============\n\n#### \n\nIn his paper, Tsallis [@Tsallis20] presents some questionable statements on black hole entropy. He affirms that since the Bekenstein-Hawking (BH) entropy is not proportional to the black hole volume, Boltzmann-Gibbs (BG) entropy would be inappropriate to describe black holes. He then proposes a different non-additive functional meant to replace entropy.\n\nThis article is organized to rebut this idea. In section 2, we present the foundations for entropy and show how the additive entropy functional does not necessarily lead to extensivity.
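For reference in the black-hole-entropy discussion above: the Bekenstein-Hawking entropy is proportional to the horizon area $A$ rather than to the volume, which is the non-extensivity at issue.

```latex
% Bekenstein-Hawking entropy: proportional to horizon area, not volume.
% For a Schwarzschild black hole of mass M, A = 16 \pi (G M / c^2)^2.
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^3\, A}{4\, G\, \hbar}
```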
In section 3, we revisit the laws of black hole thermodynamics and how they can be accurately accounted for on the basis of additive entropy.\n\nEntropic Foundations\n====================\n\n#### \n\nThe work of Jaynes [@Jaynes57; @Jaynes572] solidified ideas of Boltzmann and Gibbs for the foundations of statistical physics by designing the method of maximum entropy. Probability distributions $p(x)$ should be selected from the maximization of entropy\n\n$$\\label{KLentropy}\n S[p|q] = - \\int \\mathrm d x \\ p(x)\\ln{\\frac{p(x)}{q(x)}}" +"---\nabstract: 'A *generalized torsion element* is a non-trivial element such that some non-empty finite product of its conjugates is the identity. We construct a generalized torsion element of the fundamental group of a 3-manifold obtained by Dehn surgery along a knot in $S^{3}$.'\naddress:\n- 'Department of Mathematics, Kyoto University, Kyoto 606-8502, JAPAN'\n- 'Department of Mathematics, Nihon University, 3-25-40 Sakurajosui, Setagaya-ku, Tokyo 156\u20138550, Japan'\n- 'Department of Mathematics and Mathematics Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-Hiroshima, 739\u20138524, Japan'\nauthor:\n- Tetsuya Ito\n- Kimihiko Motegi\n- Masakazu Teragaito\ntitle: Generalized torsion and Dehn filling\n---\n\n[^1]\n\n[^2]\n\n[^3]\n\nIntroduction\n============\n\nA non-trivial element $g$ of a group $G$ is a *generalized torsion element* if there exist $x_1,\\ldots, x_k \\in G$ such that $$\\label{eqn:gtorsion}\n (x_1 g x_1^{-1})(x_2 g x_2^{-1}) \\cdots(x_{k}g x_{k}^{-1})=1.$$ That is, some non-empty finite product of its conjugates is the identity. The *order* of a generalized torsion element is the minimum $k>1$ that satisfies (\\[eqn:gtorsion\\]).\n\nOne of the motivations for exploring generalized torsion elements comes from the bi-orderability of groups. A group is *bi-orderable* if it admits a *bi-ordering*, a total ordering $<$ such that $agb<ahb$ whenever $g<h$" +"[DeepEvent]{}, in enterprise web applications for better anomaly detection. [DeepEvent]{} includes three key features: web-specific neural networks to take into account the characteristics of sequential web events, self-supervised learning techniques to overcome the scarcity of labeled data, and sequence embedding techniques to integrate contextual events and capture dependencies among web events. We evaluate [DeepEvent]{} on web events collected from six real-world enterprise web applications. Our experimental results demonstrate that [DeepEvent]{} is effective in forecasting sequential web events and detecting web-based anomalies. [DeepEvent]{} provides a context-based system for researchers and practitioners to better forecast web events with situational awareness.'\nauthor:\n- Xiaoyong Yuan\n- Lei Ding\n- Malek Ben Salem\n- Xiaolin Li\n- Dapeng Wu\nbibliography:\n- 'deep.bib'\n- 'security.bib'\n- 'web.bib'\ntitle: 'Connecting Web Event Forecasting with Anomaly Detection: A Case Study on Enterprise Web Applications Using Self-Supervised Neural Networks'\n---\n\nIntroduction {#sec:intro}\n============\n\nRecently web" +"---\nabstract: 'Pull-tabbing is an evaluation technique for functional logic programs which computes all non-deterministic results in a single graph structure. Pull-tab steps are local graph transformations to move non-deterministic choices towards the root of an expression. Pull-tabbing is independent of a search strategy so that different strategies (depth-first, breadth-first, parallel) can be used to extract the results of a computation.
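A classical small example of the generalized-torsion definition given above (a textbook illustration, not taken from that paper): in the Klein bottle group, a product of two conjugates of the generator $a$ is trivial.

```latex
% In the Klein bottle group K = < a, b | b a b^{-1} = a^{-1} >,
% the element a is a generalized torsion element, since
(a)\,(b a b^{-1}) \;=\; a\, a^{-1} \;=\; 1 .
% A product of two conjugates of a is the identity, so a has order 2 in
% the sense of the definition above, although a itself has infinite order.
```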
It has been used to compile functional logic languages into imperative or purely functional target languages. Pull-tab steps might duplicate choices in the case of shared subexpressions. This could result in a dramatic increase of execution time compared to a backtracking implementation. In this paper we propose a refinement which avoids this efficiency problem while keeping all the good properties of pull-tabbing. We evaluate a first implementation of this improved technique in the Julia programming language.'\nauthor:\n- Michael Hanus\n- Finn Teegen\nbibliography:\n- 'paper.bib'\ntitle: |\n Memoized Pull-Tabbing for\\\n Functional Logic Programming\n---\n\nIntroduction {#sec:Introduction}\n============\n\nFunctional logic languages [@AntoyHanus10CACM] combine the main features of functional and logic languages in a single programming model. In particular, demand-driven evaluation of expressions is amalgamated with non-deterministic search for values. This is the basis of optimal evaluation strategies [@AntoyEchahedHanus00JACM] and yields a" +"---\nabstract: 'In this paper, we consider the quasi-gas-dynamic (QGD) model in a multiscale environment. The model equations can be regarded as a hyperbolic regularization and are derived from kinetic equations. So far, the research on QGD models has been focused on problems with constant coefficients. In this paper, we investigate the QGD model in multiscale media, which can be used in porous media applications. This multiscale problem is interesting from a multiscale methodology point of view as the model problem has a hyperbolic multiscale term, and designing multiscale methods for hyperbolic equations is challenging. In the paper, we apply the constraint energy minimizing generalized multiscale finite element method (CEM-GMsFEM) combined with the leapfrog scheme in time to solve this problem. The CEM-GMsFEM provides a flexible and systematic framework to construct crucial multiscale basis functions for approximating the solution to the problem with reduced computational cost. With this approach of spatial discretization, we establish the stability of the fully discretized scheme under a relaxed version of the so-called CFL condition. Complete convergence analysis of the proposed method is presented. Numerical results are provided to illustrate and verify the theoretical findings.'\nauthor:\n- 'Boris Chetverushkin[^1]'\n- 'Eric Chung[^2]\u00a0, Yalchin Efendiev[^3]" +"---\nabstract: 'A three-level system attached to three thermal baths is manipulated to be a microscopic thermal device integrating a valve, a refrigerator, an amplifier, and a thermometer in the quantum regime, via tuning the inner coupling strength of the system and the temperatures of the external baths. We discuss the role of the inner coupling as well as the steady-state quantum coherence in these thermal functions using the Redfield master equation under a partial secular approximation. A highly sensitive thermometer for the low-temperature terminal can be established without assistance from the inner coupling of the system.
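To make the pull-tab step described in the pull-tabbing abstract above concrete, here is a toy sketch (the expression representation is ours, and the paper's implementation targets Julia rather than Python): a non-deterministic Choice argument is hoisted one level toward the root.

```python
# Toy illustration of a single pull-tab step (representation ours; the
# paper's implementation targets Julia). A Choice below a function
# application is hoisted one level toward the root:
#     f(Choice(e1, e2))  ~>  Choice(f(e1), f(e2))
from dataclasses import dataclass

@dataclass
class Choice:
    left: object
    right: object

@dataclass
class App:            # application of a function symbol to one argument
    fun: str
    arg: object

def pull_tab(expr):
    """Hoist a Choice argument of an application one level up."""
    if isinstance(expr, App) and isinstance(expr.arg, Choice):
        return Choice(App(expr.fun, expr.arg.left),
                      App(expr.fun, expr.arg.right))
    return expr

e = App("f", Choice("a", "b"))
print(pull_tab(e))   # Choice(left=App(fun='f', arg='a'), right=App(fun='f', arg='b'))
```

The memoization contributed by the paper addresses exactly the duplication of such Choice nodes under shared subexpressions; this sketch shows only the basic, unmemoized step.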
Our study of this multifunctional thermal device provides a deeper insight into the underlying quantum thermodynamics associated with the quantum coherence.'\naddress: 'Department of Physics, Zhejiang University, Hangzhou 310027, Zhejiang, China'\nauthor:\n- Yong Huangfu\n- 'Shi-fan Qi'\n- Jun Jing\nbibliography:\n- 'quantum-thermal-device.bib'\ntitle: 'A multifunctional quantum thermal device: with and without inner coupling'\n---\n\nquantum thermodynamics, thermometer, inner coupling, quantum coherence\n\nIntroduction\n============\n\nClassical thermodynamics, based on macroscopic statistics, has been developed over two hundred years, forming a mature theoretical framework. However, it is always an interesting project to incorporate quantum mechanics into thermodynamics, which stems from a microscopic theory about discrete levels" +"---\nabstract: 'Response process data collected from human-computer interactive items contain rich information about respondents\u2019 behavioral patterns and cognitive processes. Their irregular formats as well as their large sizes make standard statistical tools difficult to apply. This paper develops a computationally efficient method for exploratory analysis of such process data. The new approach segments a lengthy individual process into a sequence of short subprocesses to achieve complexity reduction, easy clustering and meaningful interpretation. Each subprocess is considered a subtask. The segmentation is based on sequential action predictability using a parsimonious predictive model combined with the Shannon entropy. Simulation studies are conducted to assess performance of the new methods. We use the process data from PIAAC 2012 to demonstrate how exploratory analysis of process data can be done with the new approach.'\nauthor:\n- 'Zhi Wang, Xueying Tang, Jingchen Liu and Zhiliang Ying'\nbibliography:\n- 'subtask.bib'\ntitle: Subtask Analysis of Process Data Through a Predictive Model\n---\n\nIntroduction\n============\n\nTechnology advances in educational assessments expand measurable skills beyond conventional ones. For instance, 14 items for Problem Solving in Technology-Rich Environments (PSTRE) are included in the 2012 Programme for the International Assessment of Adult Competencies (PIAAC). In these items, test-takers complete real-life" +"---\nabstract: 'This paper presents the participation of [Macquarie University and the Australian National University]{}\u00a0in Task B Phase B of the 2020 BioASQ Challenge (BioASQ8b). Our overall framework implements query-focused multi-document extractive summarisation by applying either a classification or a regression layer to the candidate sentence embeddings and to the comparison between the question and sentence embeddings. We experiment with variants using BERT and BioBERT, Siamese architectures, and reinforcement learning. We observe the best results when BERT is used to obtain the word embeddings, followed by an LSTM layer to obtain sentence embeddings. Variants using Siamese architectures or BioBERT did not improve the results.'\nauthor:\n- Diego Moll\u00e1\n- Christopher Jones\n- Vincent Nguyen\nbibliography:\n- 'mybibliography.bib'\nsubtitle: '[Macquarie University and the Australian National University]{}\u00a0at BioASQ8b'\ntitle: 'Query Focused Multi-document Summarisation of Biomedical Texts[^1]'\n---\n\nIntroduction\n============\n\nQuery focused multi-document summarisation aims to generate the answer to a question by combining information from multiple documents\u00a0[@Dang:2006].
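The predictability-plus-entropy segmentation idea in the process-data abstract above can be sketched as follows; the bigram predictive model and the fixed surprisal threshold are simplifications of ours, not the authors' exact procedure.

```python
# Simplified sketch of predictability-based segmentation of an action
# sequence: fit bigram next-action probabilities, then propose subtask
# boundaries where the next action is hard to predict. The bigram model
# and the threshold rule are our simplifications, not the paper's method.
from collections import Counter, defaultdict
import math

def boundaries(seq, threshold_bits=1.0):
    pair_counts, ctx_counts = defaultdict(Counter), Counter()
    for a, b in zip(seq, seq[1:]):
        pair_counts[a][b] += 1
        ctx_counts[a] += 1
    cut_points = []
    for i, (a, b) in enumerate(zip(seq, seq[1:])):
        p = pair_counts[a][b] / ctx_counts[a]
        surprisal = -math.log2(p)          # bits needed to predict b from a
        if surprisal > threshold_bits:
            cut_points.append(i + 1)       # boundary before position i + 1
    return cut_points

actions = list("ababababxyxyxyab")
print(boundaries(actions))                 # cuts where the pattern switches
```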
This task, therefore, is related to both question answering and text summarisation. There is substantial research in both question answering and text summarisation. In the case of text summarisation, most research focuses on single-document summarisation, and there is also substantial research" +"---\nabstract: 'We consider the SISI epidemic model in discrete time. The crucial point of this model is that an individual can be infected twice. This non-linear evolution operator depends on seven parameters and we assume that the population size under consideration is constant, so the death rate is the same as the birth rate per unit time. Reducing it to a quadratic stochastic operator (QSO), we study the dynamical system of the SISI model.'\naddress: 'Sobirjon Shoyimardonov. \u00a0\u00a0V.I.Romanovskiy institute of mathematics, 81, Mirzo Ulug\u2019bek str., 100125, Tashkent, Uzbekistan.'\nauthor:\n- 's. k. shoyimardonov'\ntitle: 'A non-linear discrete-time dynamical system related to the epidemic SISI model'\n---\n\nIntroduction\n============\n\nIn [@Green] the SISI model is considered in continuous time as a model of the spread of bovine respiratory syncytial virus (BRSV) amongst cattle. They performed an equilibrium and stability analysis and considered applications to Aujesky\u2019s disease (pseudorabies virus) in pigs. In [@Muller] the SISI model was considered as an example, and the conditions for the fixed point equation were characterised. In both of these works it was assumed that the population size under consideration is a constant, so the per capita death rate is equal to the per capita birth rate.\n\nLet us consider the SISI model [@Muller]: $$\\begin{cases}\n\\frac{dS}{dt} & =b(S+I+S_1+I_1)-\\mu S-\\beta_1 A(I,I_1)S\\\\[2mm]\n\\frac{dI}{dt}" +"---\nabstract: 'A key for person re-identification is achieving consistent local details for discriminative representation across variable environments. Current stripe-based feature learning approaches have delivered impressive accuracy, but do not make a proper trade-off between diversity, locality, and robustness, and thus easily suffer from part semantic inconsistency owing to the conflict between rigid partitioning and misalignment. This paper proposes a receptive multi-granularity learning approach to facilitate stripe-based feature learning. This approach performs local partitioning on the intermediate representations to operate on receptive region ranges, rather than, as in current approaches, on input images or output features, and thus can enhance the representation of locality while retaining proper local association. Toward this end, the local partitions are adaptively pooled by using significance-balanced activations for uniform stripes. Random shifting augmentation is further introduced to provide a higher variance of person-appearing regions within bounding boxes to ease misalignment. With a two-branch network architecture, different scales of discriminative identity representation can be learned. In this way, our model can provide a more comprehensive and efficient feature representation without larger model storage costs. Extensive experiments on intra-dataset and cross-dataset evaluations demonstrate the effectiveness of the proposed approach. In particular, our approach achieves a state-of-the-art accuracy of 96.2%@Rank-1 or 90.0%@mAP on the challenging Market-1501" +"---\nabstract: 'Motivated by a M\u00f6bius invariant subdivision scheme for polygons, we study a curvature notion for discrete curves where the cross-ratio plays an important role in all our key definitions.
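Since the cross-ratio drives all key definitions in the discrete-curvature abstract above, here is a small numerical check of its Möbius invariance (one common convention of the cross-ratio is used, as conventions differ by permutations of the arguments; the transformation coefficients are arbitrary choices of ours).

```python
# Numerical check that the complex cross-ratio is invariant under Moebius
# transformations. One common convention is used; others permute arguments.
def cross_ratio(a, b, c, d):
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def moebius(z, alpha=2 + 1j, beta=-1j, gamma=1.5, delta=0.5 - 2j):
    # Any coefficients with alpha*delta - beta*gamma != 0 work; these are arbitrary.
    return (alpha * z + beta) / (gamma * z + delta)

pts = [0.3 + 0.1j, 1.2 - 0.7j, -0.5 + 2.0j, 2.4 + 0.9j]
before = cross_ratio(*pts)
after = cross_ratio(*(moebius(z) for z in pts))
print(abs(before - after))   # ~1e-16, i.e. invariant up to round-off
```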
Using a particular M[\u00f6]{}bius invariant point-insertion rule, comparable to the classical four-point scheme, we construct circles along discrete curves. Asymptotic analysis shows that these circles defined on a sampled curve converge to the smooth curvature circles as the sampling density increases. We express our discrete torsion for space curves, which is not a M[\u00f6]{}bius invariant notion, using the cross-ratio and show its asymptotic behavior in analogy to the curvature.'\nauthor:\n- 'Christian M[\u00fc]{}ller and Amir Vaxman'\nbibliography:\n- 'moebiuscurvaturecircle.bib'\ntitle: 'Discrete Curvature and Torsion from Cross-Ratios'\n---\n\nIntroduction\n============\n\nMany topics in applied geometry like computer graphics, computer vision, and geometry processing in general cover tasks like the acquisition and analysis of geometric data, its reconstruction, and further its manipulation and simulation. Numerically stable approximations of 3D-geometric notions play a crucial part here in creating algorithms that can handle such tasks. In particular the estimation of *curvatures* of curves and surfaces is needed in these geometric algorithms\u00a0[@boutin-2000; @langer+2005; @pottmann-2009-iir]. A good understanding of estimating curvatures of curves is often an" +"---\nabstract: 'Using Density Functional Theory and a thermodynamic model \\[Physical Review B 86, 134418 (2012)\\], in this paper, we provide an approach to systematically screen compounds of a given Heusler family to predict ones that can yield a giant magnetocaloric effect driven by a first-order magneto-structural transition. We apply this approach to two Heusler series, Ni$_{2-x}$Fe$_{x}$Mn$_{1+z-y}$Cu$_{y}$Sb$_{1-z}$ and Ni$_{2-x}$Co$_{x}$Mn$_{1+z-y}$Cu$_{y}$Sb$_{1-z}$, obtained by cosubstitution at Ni and Mn sites. We predict four new compounds with the potential to achieve the target properties. Our computations of the thermodynamic parameters, relevant for magnetocaloric applications, show that the improvement in the parameters in the predicted cosubstituted compounds can be as large as four times in comparison to the off-stoichiometric Ni-Mn-Sb and a compound derived by single substitution at the Ni site, where magnetocaloric effects have been observed experimentally. This work establishes a protocol to select new compounds that can exhibit large magnetocaloric effects and demonstrates cosubstitution as a route for more flexible tuneability to achieve outcomes better than the existing ones.'\nauthor:\n- Sheuly Ghosh\n- Subhradip Ghosh\ntitle: 'Giant magnetocaloric effect driven by first-order magneto-structural transition in cosubstituted Ni-Mn-Sb Heusler compounds: predictions from *Ab initio* and Monte Carlo calculations'\n---\n\nIntroduction\n============\n\nThe development of magnetic" +"---\nabstract: 'We explore path planning followed by kinodynamic smoothing while ensuring vehicle dynamics feasibility for MAVs. We have chosen a geometrically based motion planning technique, RRT\\*, for this purpose. In the proposed technique, we modified the original RRT\\* by introducing an adaptive search space and a steering function, which help to increase the consistency of the planner. Moreover, we propose multiple RRT\\*, which generates a set of desired paths from which the optimal path is selected. Then we apply kinodynamic smoothing, which results in a dynamically feasible as well as obstacle-free path.
Thereafter, a B-spline-based trajectory is generated to maneuver the vehicle autonomously in unknown environments. Finally, we have tested the proposed technique in various simulated environments.'\nauthor:\n- \nbibliography:\n- 'references.bib'\ntitle: 'Path Planning Followed by Kinodynamic Smoothing for Multirotor Aerial Vehicles (MAVs)\\'\n---\n\nRRT\\*, iLQR, B-spline, OctoMap, Ellipsoidal search space\n\nIntroduction\n============\n\nWith the recent research advances in microcontroller technology and sensor capabilities, a new era has begun for MAVs. MAVs have recently been engaged in plenty of applications, including delivery, farming and cinematography. Motion planning is one of the challenging tasks in almost all preceding scenarios. Subsequently, geometric-based motion planning is" +"---\nauthor:\n- 'Andrea Ravenni,[!!]{}'\n- 'Matteo Rizzato,'\n- 'Sla\u0111ana Radinovi\u0107,'\n- 'Michele Liguori,'\n- 'Fabien Lacasa,'\n- and Elena Sellentin\nbibliography:\n- 'bibliografia.bib'\ntitle: 'Breaking degeneracies with the Sunyaev-Zeldovich full bispectrum'\n---\n\nIntroduction\n============\n\nThe thermal Sunyaev-Zeldovich (tSZ) effect [@Zeldovich:1969ff; @Sunyaev:1970er] [@Mroczkowski:2018nrv for a recent review] is a spectral distortion of the Cosmic Microwave Background (CMB), mostly generated in galaxy clusters by inverse Compton scattering of CMB photons off hot electrons. The tSZ effect is a powerful cosmological observable, mainly applied to the study of individual clusters, to build cluster catalogues and to extract number count statistics. A complementary possibility consists in the study of the tSZ angular power spectrum. After being originally discussed in [@Komatsu:2002wc], this approach has since been adopted as a powerful probe of the low-redshift Universe, to test both the standard $\\Lambda$CDM scenario [@Ade:2013qta; @Aghanim:2015eva] and some extended models, which encompass primordial non-Gaussianity, massive neutrinos and dark energy [@Hill:2013baa; @Roncarelli:2014jla; @Bolliet:2017lha; @McCarthy:2017csu; @Bolliet:2019zuz]. One of the advantages of the tSZ power spectrum analysis is that it also allows the inclusion of small, unresolved clusters, and it does not require direct measurements of cluster masses.\n\nAs has been argued long before the tSZ was routinely measured across" +"---\nabstract: |\n Consumers interact with firms across multiple devices, browsers, and machines; these interactions are often recorded with different identifiers for the same consumer. The failure to correctly match different identities leads to a fragmented view of exposures and behaviors. This paper studies the *identity fragmentation bias*, referring to the estimation bias resulting from using fragmented data. Using a formal framework, we decompose the contributing factors of the estimation bias caused by data fragmentation and discuss the direction of bias. Contrary to conventional wisdom, this bias cannot be signed or bounded under standard assumptions. Instead, upward biases and sign reversals can occur even in experimental settings. We then compare several corrective measures, and discuss their respective advantages and caveats.\n\n ***Keywords:*** fragmentation, cookies, bias, inference, privacy, measurement\nauthor:\n- 'Tesary Lin[^1]'\n- 'Sanjog Misra[^2]'\nbibliography:\n- 'CrossDevice.bib'\ntitle: The Identity Fragmentation Bias\n---\n\nIntroduction\n============\n\nConsumers\u2019 digital footprints are becoming increasingly fragmented.
A typical consumer uses multiple devices and navigates across websites throughout an online purchase journey. Companies and websites typically track consumers via cookies, which are text files generated to identify a *user agent* (a browser-device combination) when consumers first visit a website. However, cookies are browser, device, and" +"---\nabstract: 'This paper proposes an efficient FDTD technique for determining electromagnetic fields interacting with finite-sized 2D and 3D periodic structures. The technique combines periodic boundary conditions\u2014modelling fields away from the edges of the structure\u2014with independent simulations of fields near the edges of the structure. It is shown that this algorithm efficiently determines the size of a periodic structure necessary for fields to converge to the infinitely-periodic case. Numerical validations of the technique illustrate the savings concomitant with the algorithm.'\nauthor:\n- 'Aaron\u00a0J.\u00a0Kogon and Costas\u00a0D.\u00a0Sarris'\nbibliography:\n- 'main.bib'\ntitle: 'An Expedient Approach to FDTD-based Modeling of Finite Periodic Structures'\n---\n\nIntroduction\n============\n\nPeriodic structures are common geometries in electromagnetics, appearing in such forms as frequency selective surfaces, electromagnetic band gap media, photonic crystals, antenna arrays and metasurfaces, among others.\n\nA common way to simulate infinitely periodic structures in the Finite-Difference Time-Domain (FDTD) method [@taflove2005computational] is by employing periodic boundary conditions (PBCs) [@kogon2020fdtd]. PBCs collapse an infinitely-long periodic domain into a single unit cell. PBC simulations have been successfully used to extract transmission and reflection spectra [@yang2007simple], attenuation constants [@kokkinos2006periodic; @xu2007finite], Brillouin diagrams [@kokkinos2006periodic; @chan1995order], among others.\n\nAlthough PBCs are effective at simulating infinitely-periodic structures," +"---\nabstract: 'This paper presents a Lagrangian approach to simulating multibody dynamics in a tensegrity framework with an ability to tackle holonomic constraint violations in an energy-preserving scheme. Governing equations are described using non-minimum coordinates to simplify descriptions of the structure\u2019s kinematics. To minimize constraint drift arising from this redundant system, the direct correction method has been employed in conjunction with a novel energy-correcting scheme that treats the total mechanical energy of the system as a supplementary constraint. The formulation has been extended to allow tensegrity structures with compressible bars, allowing for further discussion on potential choices for softer bar materials. A benchmark example involving a common tensegrity structure demonstrates the superiority of the presented formulation over Simscape Multibody in terms of motion accuracy as well as energy conservation.
The effectiveness of the energy correction scheme is found to increase with the extent of deformations in the structure.'\nauthor:\n- 'Shao-Chen Hsu$^{1}$'\n- 'Vaishnav Tadiparthi$^{2}$'\n- 'Raktim Bhattacharya$^{2}$'\nbibliography:\n- 'raktim.bib'\ntitle: '**A Lagrangian Method for Constrained Dynamics in Tensegrity Systems with Compressible Bars**'\n---\n\nIntroduction\n============\n\nA tensegrity system is an arrangement of axially-loaded elements (no element bends, even though the overall structure bends) that we loosely characterize" +"---\nabstract: 'Low-frequency long-range errors (drift) are an endemic problem in 3D structure from motion, and can often hamper reasonable reconstructions of the scene. In this paper, we present a method to dramatically reduce scale and positional drift by using extended structural features such as planes and vanishing points. Unlike traditional feature matches, our extended features are able to span non-overlapping input images, and hence provide long-range constraints on the scale and shape of the reconstruction. We add these features as additional constraints to a state-of-the-art global structure from motion algorithm and demonstrate that the added constraints enable the reconstruction of particularly drift-prone sequences such as long, low field-of-view videos without inertial measurements. Additionally, we provide an analysis of the drift-reducing capabilities of these constraints by evaluating on a synthetic dataset. Our structural features are able to significantly reduce drift for scenes that contain long-spanning man-made structures, such as aligned rows of windows or planar building facades.'\nauthor:\n- |\n Aleksander Holynski$^1{}^*$, David Geraghty$^2$, Jan-Michael Frahm$^2$, Chris Sweeney$^3$, Richard Szeliski$^2$\\\n \\\n $^1$University of Washington $^2$Facebook $^3$Facebook Reality Labs\\" +"---\nabstract: |\n The large-scale distribution of matter in the universe forms a network of clusters, filaments and walls enclosing large empty voids. Voids in turn can be described as a cellular system in which voids/cells define dynamically distinct regions. Cellular systems arising from a variety of physical and biological processes have been observed to closely follow scaling laws relating their geometry, topology and dynamics. These scaling laws have never been studied for cosmological voids, the largest known cellular system. Using a cosmological N-body simulation we present a study of the scaling relations of the network of voids, extending their validity by over 30 orders of magnitude in scale with respect to other known cellular systems.\n\n Scaling relations allow us to make indirect measurements of the dynamical state of voids from their geometry and topology. Using our results we interpret the \u201clocal velocity anomaly\" observed in the Leo Spur as the result of a collapsing void in our cosmic backyard.\n\n Moreover, the geometry and connectivity of voids directly depend on the curvature of space.
Here we propose scaling relations as an independent and novel measure of the metric of space and discuss their use in future galaxy surveys.\nauthor:\n-" +"---\nabstract: 'We investigate the position- and momentum-space two\u2013body correlations in a weakly interacting, harmonically trapped atomic Bose-Einstein condensed gas at low temperatures. The two\u2013body correlations are computed within the Bogoliubov approximation and the peculiarities of the trapped gas are highlighted in contrast to the spatially homogeneous case. In the position space, we recover the anti\u2013bunching induced by the repulsive inter\u2013atomic interaction in the condensed fraction localized around the trap center and the bunching in the outer thermal cloud. In the momentum space, bunching signatures appear for either equal or opposite values of the momentum and display peculiar features as a function of the momentum and the temperature. In analogy to the optical Hanbury Brown and Twiss effect, the amplitude of the bunching signal at close-by momenta is fixed by the chaotic nature of the matter field state and its linewidth is shown to be set by the (inverse of the) finite spatial size of the associated in-trap momentum components. In contrast, the linewidth of the bunching signal at opposite momenta is only determined by the condensate size.'\nauthor:\n- Salvatore Butera\n- David Cl\u00e9ment\n- Iacopo Carusotto\nbibliography:\n- 'HBT.bib'\ntitle: 'Position- and momentum-space two-body correlations in a weakly interacting" +"---\nabstract: 'State-of-the-art saliency prediction methods build upon model architectures or loss functions while training to generate one target saliency map. However, publicly available saliency prediction datasets can be utilized to create more information for each stimulus than just a final aggregate saliency map. This information, when utilized in a biologically inspired fashion, can contribute to better prediction performance without the use of models with a huge number of parameters. In this light, we propose to extract and use the statistics of (a) region-specific saliency and (b) temporal order of fixations, to provide additional context to our network. We show that extra supervision using spatially or temporally sequenced fixations results in achieving better performance in saliency prediction. Further, we also design novel architectures for utilizing this extra information and show that it achieves superior performance over a base model which is devoid of extra supervision. We show that our best method outperforms previous state-of-the-art methods with 50-80% fewer parameters. We also show that our models perform consistently well across all evaluation metrics unlike prior methods.'\nbibliography:\n- 'egbib.bib'\ntitle: 'RecSal: Deep Recursive Supervision for Visual Saliency Prediction'\n---\n\nIntroduction {#sec:intro}\n============\n\nVisual saliency is the probability of" +"---\nabstract: 'Contrary to previous studies of the boson peak, we analyze the density of states and specific heat contribution of dispersion forces in an amorphous solid at nano-scales ($\\sim 3\\ \\mathrm{nm}$). Our analysis indicates a universal semi-circle form of the average density of states in the bulk of the spectrum along with a super-exponentially increasing behavior at its edge.
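The "universal semi-circle form" in the boson-peak abstract above matches, up to normalization, the Wigner semicircle law of random matrix theory; in a standard normalization with spectral radius $R$ it reads as below (this identification is our gloss, not a statement from that paper).

```latex
% Wigner semicircle density with spectral radius R (standard normalization);
% the "semi-circle form" referenced above, up to the paper's normalization.
\rho(\lambda) \;=\; \frac{2}{\pi R^{2}}\,\sqrt{R^{2}-\lambda^{2}},
\qquad |\lambda| \le R .
```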
The latter in turn leads to a specific heat that behaves linearly with temperature below $1\;{\rm K}$ even at nano-scales and, surprisingly, agrees with the experiments although the latter are carried out at macroscopic scales. The omnipresence of dispersion forces at microscopic scales indicates that our results apply to other disordered materials too.'\nauthor:\n- Pragya Shukla\ntitle: 'Low temperature heat capacity of amorphous systems: physics at nano-scales '\n---\n\nIntroduction\n============\n\nAt low temperatures, structurally and orientationally disordered solids such as dielectric and metallic glasses, amorphous polymers and even crystals are experimentally observed to exhibit many universalities [@zp; @stephen; @plt]. Initially the origin of this behavior was attributed to the tunnelling two level systems (TTLS) intrinsic to the disordered state [@and; @phil]. The existence of TTLS entities in a wide range of materials is, however, not well-established (besides" +"---\nabstract: 'The novel coronavirus, COVID-19, has impacted various aspects of the world, from tourism, business, and education to many more. As for every country, the global pandemic has imposed similar effects on Ghana. During this period, citizens of this country have used social networks as a platform to find and disseminate information about the infectious disease and also share their own opinions and sentiments. In this study, we use text mining to draw insights from data collected from the social network Twitter. Our exploration of the data led us to understand the most frequent topics raised in the Greater Accra region of Ghana from March to July 2020. We observe that the engagement of users of this social network was initially high in March but declined from April to July. The reason was probably that people were becoming more adapted to the situation after an initial shock when the disease was announced in the country. We also found certain words in these users\u2019 tweets that enabled us to understand the sentiments and mental state of individuals at the time.'\nauthor:\n- 'Josimar E. Chire Saire'\n- 'Kobby Panford-Quainoo'\nbibliography:\n- 'biblio.bib'\ntitle: |\n Twitter Interaction to Analyze\\\n Covid-19 Impact" +"---\nabstract: 'Nonlinear extensions to the active subspaces method have brought remarkable results for dimension reduction in the parameter space and response surface design. We further develop a kernel-based nonlinear method. In particular, we introduce it within a broader mathematical framework that also contemplates the reduction in parameter space of multivariate objective functions. The implementation is thoroughly discussed and tested on more challenging benchmarks than the ones already present in the literature, for which dimension reduction with active subspaces already produces good results. Finally, we show a whole pipeline for the design of response surfaces with the new methodology in the context of a parametric CFD application solved with the Discontinuous Galerkin method.'\nauthor:\n- 'Francesco\u00a0Romor[^1]'\n- 'Marco\u00a0Tezzele[^2]'\n- 'Andrea\u00a0Lario[^3]'\n- 'Gianluigi\u00a0Rozza[^4]'\ntitle: 'Kernel-based Active Subspaces with application to CFD problems using Discontinuous Galerkin method'\n---\n\nIntroduction {#sec:intro}\n============\n\nNowadays, in many industrial settings the simulation of complex systems requires a huge amount of computational power. 

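The kernel-based record above extends the classical (linear) active subspaces construction, in which the dominant eigenvectors of the sampled gradient covariance span the reduced subspace. A minimal sketch of that classical step, given here as an illustration and not as the authors' implementation:

```python
import numpy as np

def active_subspace(grads, k):
    """Top-k eigenpairs of the empirical gradient covariance C = E[grad grad^T]."""
    C = grads.T @ grads / grads.shape[0]
    eigvals, eigvecs = np.linalg.eigh(C)           # ascending eigenvalues
    return eigvals[::-1], eigvecs[:, ::-1][:, :k]  # descending order, first k

# Toy check: f(x) = (w @ x)**2 has a one-dimensional active subspace along w.
rng = np.random.default_rng(0)
w = np.array([1.0, 2.0, 0.5])
X = rng.normal(size=(500, 3))
grads = 2.0 * (X @ w)[:, None] * w[None, :]        # exact gradients of f
_, W1 = active_subspace(grads, k=1)                # W1 is parallel to w, up to sign
```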
Problems involving high-fidelity simulations are usually large-scale; moreover, the number of solutions required increases with the number of parameters. In this context, we mention optimization tasks, inverse problems, optimal control problems, and uncertainty quantification; they all suffer from the curse" +"---\nabstract: 'The set of recurrent configurations of a graph together with a sum operation form the sandpile group. It is well known that recurrent sandpile configurations can be characterized as the optimal solutions of certain optimization problems. In this article, we present two new integer linear programming models, one that computes recurrent configurations and another that computes the order of the configuration. Finally, by using duality of linear programming, we are able to compute the identity configuration for the cone of a regular graph.'\naddress:\n- ' Banco de M\u00e9xico, Ciudad de M\u00e9xico, M\u00e9xico. '\n- |\n Departamento de Matem\u00e1ticas\\\n Centro de Investigaci\u00f3n y de Estudios Avanzados del IPN\\\n Apartado Postal 14\u2013740\\\n 07000 Ciudad de M\u00e9xico, M\u00e9xico. \nauthor:\n- 'Carlos A. Alfaro'\n- 'Carlos E. Valencia'\n- 'Marcos\u00a0C.\u00a0Vargas'\nbibliography:\n- 'biblio.bib'\ntitle: Computing sandpile configurations using integer linear programming\n---\n\nSandpile group, recurrent configurations, integer linear programming\n\nIntroduction\n============\n\nThe [*Abelian sandpile model*]{} was first studied by Bak, Tang and Wiesenfeld [@Bak87; @Bak88] on integer grid graphs. It was the first example of a [*self-organized critical system*]{}, which attempts to explain the occurrence of power laws in many natural phenomena\u00a0[@Bak96] ranging over different fields such as geophysics" +"---\nabstract: 'Production of scalar particles by a relativistic, semi-transparent mirror in 1+3D Minkowski spacetime based on the Barton-Calogeracos (BC) action is investigated. The corresponding Bogoliubov coefficients are derived for a mirror with an arbitrary trajectory. In particular, we apply our derived formula to the gravitational collapse trajectory. In addition, we identify the relation between the particle spectrum and the particle production probability, and we demonstrate the equivalence between our approach and the existing approach in the literature, which is restricted to 1+1D. In short, our treatment extends the study to 1+3D spacetime. Lastly, we offer a third approach for finding the particle spectrum using the S-matrix formalism.'\nauthor:\n- 'Kuan-Nan Lin,${}^{1,2}$[^1] Chih-En Chou,${}^{1,2}$[^2] and Pisin Chen${}^{1,2,3}$[^3]'\ntitle: |\n Particle Production by a Relativistic Semi-transparent Mirror\\\n in 1+3D Minkowski Spacetime\n---\n\nIntroduction {#sec:intro}\n============\n\nIn 1970, Moore demonstrated [@0] that quanta of the electromagnetic field may be produced from the initial vacuum state if the field is constrained in a one-dimensional cavity and subject to time-dependent Dirichlet boundary conditions in 1+1D Minkowski spacetime. This phenomenon is a manifestation of the interaction between vacuum fluctuations of the quantized field and moving boundaries. A few years later, DeWitt [@0a] showed that, for a scalar" +"---\nabstract: 'Low-level classification extracts features (i.e. physical features) from the elements and uses them to train a model for later classification. High-level classification uses high-level features, namely the existing patterns and relationships in the data, and combines low- and high-level features for classification. 

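The "high-level" descriptors named in this last record (average neighbor degree, average clustering) are standard complex-network measures; a minimal sketch of extracting them from a network built over a data matrix, assuming a kNN construction and networkx 3.x (an illustration, not the paper's exact pipeline):

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import kneighbors_graph

X = np.random.rand(100, 4)                        # toy data: 100 samples, 4 features
A = kneighbors_graph(X, n_neighbors=5)            # sparse kNN adjacency over the data
G = nx.from_scipy_sparse_array(A)                 # the "complex network" over the data
avg_clustering = nx.average_clustering(G)         # global high-level feature
neighbor_degree = nx.average_neighbor_degree(G)   # per-node feature, dict keyed by node
```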
High-level features can be obtained from a complex network created over the data. Local and global features are used to describe the structure of a complex network, e.g. average neighbor degree and average clustering. The present work proposes a novel feature to describe the architecture of the network, following an Ant Colony System approach. The experiments show the advantage of using this feature owing to its sensitivity to data of different classes.'\nauthor:\n- 'Josimar E. Chire-Saire'\nbibliography:\n- 'biblio.bib'\ntitle: New feature for Complex Network based on Ant Colony Optimization\n---\n\nIntroduction\n============\n\nNature has a special order that establishes how things work. Human beings want to understand how nature works, and for many centuries they have studied the environment through mathematics, physics, chemistry, biology and other fields. Many of these studies show the existence of many systems in which repeated structures (patterns) are present in the way they interconnect with each other, e.g. cell systems, solar" +"---\nabstract: 'For dimensions $n\geq 3$ and $k\in\{2, \cdots, n\}$, we show that the space of metrics of $k$-positive Ricci curvature on the sphere $S^{n}$ has the structure of an $H$-space with a homotopy commutative, homotopy associative product operation. We further show, using the theory of operads and results of Boardman, Vogt and May, that the path component of this space containing the round metric is weakly homotopy equivalent to an $n$-fold loop space.'\naddress: |\n Department of Mathematics and Statistics\\\n Maynooth University\\\n Maynooth\\\n Ireland \nauthor:\n- Mark Walsh\n- 'David J.\u00a0Wraith'\ntitle: 'H-space and loop space structures for intermediate curvatures'\n---\n\nIntroduction {#intro}\n============\n\nThere has been a great deal of interest in recent years in the topology of the space of Riemannian metrics satisfying given curvature conditions on a given manifold. (As a starting point for this topic, see [@TW].) Interest has been mainly directed towards studying the homotopy and (co)homology groups of these spaces of metrics, with many results demonstrating that these algebraic invariants are often non-trivial. There are, of course, other aspects of topology which are not captured by computing homotopy and homology. In this paper, we focus on the existence of $H$-space structures and" +"---\nabstract: 'Tumor segmentation in multimodal medical images has seen a growing trend towards deep learning-based methods. Typically, studies dealing with this topic fuse multimodal image data to improve the tumor segmentation contour for a single imaging modality. However, they do not take into account that tumor characteristics are emphasized differently by each modality, which affects the tumor delineation. Thus, the tumor segmentation is modality- and task-dependent. This is especially the case for soft tissue sarcomas, where, due to necrotic tumor tissue, the segmentation differs vastly. Closing this gap, we develop a modality-specific sarcoma segmentation model that utilizes multimodal image data to improve the tumor delineation on each individual modality. We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches, and the use of resource-efficient densely connected convolutional layers. We further conduct experiments to analyze how different input modalities and encoder-decoder fusion strategies affect the segmentation result. 

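The modality-specific branching described in this co-segmentation record can be made concrete with a skeletal two-branch network; the following sketch only illustrates the branching and bottleneck fusion under assumed shapes, not the authors' densely connected architecture:

```python
import torch
import torch.nn as nn

class TwoBranchCoSeg(nn.Module):
    """Skeletal two-modality co-segmentation: one encoder per modality,
    features fused at the bottleneck, one decoder head per modality so
    each modality receives its own tumor mask."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc_mri = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.enc_pet = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.dec_mri = nn.Conv2d(2 * ch, 1, 1)   # sees both modalities' features
        self.dec_pet = nn.Conv2d(2 * ch, 1, 1)

    def forward(self, mri, pet):
        fused = torch.cat([self.enc_mri(mri), self.enc_pet(pet)], dim=1)
        return self.dec_mri(fused), self.dec_pet(fused)

model = TwoBranchCoSeg()
mri, pet = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
mask_mri_logits, mask_pet_logits = model(mri, pet)   # (2, 1, 64, 64) each
```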
We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequences) and PET/CT scans. The results show that our multimodal co-segmentation model provides better modality-specific tumor segmentation than models using only the PET or MRI (T1" +"---\nabstract: 'FASER is one of the promising experiments searching for long-lived particles beyond the Standard Model. In this paper, we focus on a dark photon associated with an additional U(1) gauge symmetry, together with a scalar boson that breaks this U(1) gauge symmetry. We study the sensitivity to the dark photon originating from U(1)-breaking scalar decays. We find that a sizable number of dark photon signatures can be expected in a wider parameter space than in previous studies.'\nauthor:\n- Takeshi Araki\n- Kento Asai\n- Hidetoshi Otono\n- Takashi Shimomura\n- Yosuke Takubo\nbibliography:\n- 'ref.bib'\ntitle: Dark Photon from Light Scalar Boson Decays at FASER \n---\n\nKYUSHU-RCAPP-2020-03\\\nUME-PP-014\n\nIntroduction {#sec:introduction}\n============\n\nFASER (ForwArd Search ExpeRiment)\u00a0[@Feng:2017uoz; @Ariga:2018pin; @Ariga:2019ufm; @Ariga:2018uku] is a new experiment to search for new light, weakly interacting, neutral particles that are generated in proton-proton collisions in the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN). The detector will be placed 480\u00a0m downstream from the ATLAS interaction point (IP). Utilizing a large cross-section of proton-proton inelastic interactions in the forward region, FASER can realize high sensitivity to such new particles even with a compact detector. FASER will collect about 150\u00a0fb$^{-1}$ of data" +"---\nabstract: 'Understanding flow dynamics is of great importance in a wide range of disciplines, e.g. astrophysics, geophysics, biology, mechanical engineering and biomedical engineering. As a reliable approach in practice, especially for turbulent flows, regional flow information, such as velocity and its statistics, can be measured experimentally. Due to poor fidelity or experimental limitations, some information may not be resolved in a region of interest. On the other hand, detailed flow features are described by the governing equations, e.g. the Navier-Stokes equations for viscous fluid, and can be resolved numerically, which is heavily dependent on the capability of either computing resources or modelling. Alternatively, we address this problem by employing physics-informed deep learning and treating the governing equations as a parameterised constraint to recover the missing flow dynamics. We demonstrate that, with limited data, whether from experiments or elsewhere, the flow dynamics in regions where the required data are missing or unmeasured can be reconstructed with the parameterised governing equations. Meanwhile, a richer dataset, with the spatial distribution of the control parameter (e.g. the eddy viscosity of turbulence models), can be obtained. The method provided in this paper may shed light on data-driven scale-adaptive turbulent" +"---\nabstract: 'We initiate the study of multiplicative structures on cones and show that cones of Floer continuation maps fit naturally in this framework. We apply this to give a new description of the multiplicative structure on Rabinowitz Floer homology and cohomology, and to give a new proof of the Poincar\u00e9 duality theorem which relates the two. 

The underlying algebraic structure admits two incarnations, both new, which we study and compare: on the one hand the structure of an $A_2^+$-algebra on the space ${\mathcal{A}}$ of Floer chains, and on the other hand the structure of an $A_2$-algebra involving ${\mathcal{A}}$, its dual ${\mathcal{A}}^\vee$ and a continuation map from ${\mathcal{A}}^\vee$ to ${\mathcal{A}}$.'\naddress:\n- 'Universit\u00e4t Augsburg Universit\u00e4tsstrasse 14, D-86159 Augsburg, Germany'\n- ' Sorbonne Universit\u00e9, Universit\u00e9 Paris Diderot, CNRS Institut de Math\u00e9matiques de Jussieu-Paris Rive Gauche, IMJ-PRG Paris, France and Universit\u00e9 de Strasbourg Institut de recherche math\u00e9matique avanc\u00e9e, IRMA Strasbourg, France'\nauthor:\n- Kai Cieliebak\n- Alexandru Oancea\nbibliography:\n- '000\\_SHpair.bib'\ntitle: Multiplicative structures on cones and duality\n---\n\nIntroduction {#sec:introduction}\n============\n\nRabinowitz Floer homology was originally defined as the Floer homology of the Rabinowitz action functional\u00a0[@CF]. An alternative description as \u201cV-shaped symplectic homology\u201d was found in\u00a0[@Cieliebak-Frauenfelder-Oancea], relating Rabinowitz Floer homology to" +"---\nabstract: 'It is generally accepted that the effective magnetic field acting on a magnetic moment is given by the gradient of the energy with respect to the magnetization. However, in *ab initio* spin dynamics within the adiabatic approximation, the effective field is also known to be exactly the negative of the constraining field, which acts as a Lagrange multiplier to stabilize an out-of-equilibrium, non-collinear magnetic configuration. We show that for Hamiltonians without mean-field parameters both of these fields are exactly equivalent, while there can be a finite difference for mean-field Hamiltonians. For density-functional theory (DFT) calculations the constraining field obtained from the auxiliary Kohn-Sham Hamiltonian is not exactly equivalent to the DFT energy gradient. This inequality is highly relevant for both *ab initio* spin dynamics and the *ab initio* calculation of exchange constants and effective magnetic Hamiltonians. We argue that the effective magnetic field and exchange constants have the highest accuracy in DFT when calculated from the energy gradient and not from the constraining field.'\nauthor:\n- Simon Streib\n- Vladislav Borisov\n- Manuel Pereiro\n- Anders Bergman\n- Erik Sj\u00f6qvist\n- Anna Delin\n- Olle Eriksson\n- Danny Thonig\ndate: 'December 8, 2020'\ntitle: 'Equation of motion and" +"---\nauthor:\n- |\n Mar\u00eda Virginia Sabando[^1], Pavol Ulbrich, Mat\u00edas Selzer, Jan By\u0161ka,\\\n Jan Mi\u010dan, Ignacio Ponzoni, Axel J. Soto, Mar\u00eda Luj\u00e1n Ganuza, Barbora Kozl\u00edkov\u00e1\nbibliography:\n- 'references.bib'\ntitle: 'ChemVA: Interactive Visual Analysis of Chemical Compound Similarity in Virtual Screening'\n---\n\nSmall organic chemical compounds are the cornerstone of drug design. New medications are found by exploring a large number of candidate compounds or by designing new ones. In the last decades, high-throughput screening has been the main procedure applied during the early stages of the drug discovery process\u00a0[@macarron2011impact; @hertzberg2000high]. This process requires chemical synthesis and experimental testing of large libraries of compounds against a biological target (protein), and it has a high attrition rate, which makes the process costly and time-consuming. 

These drawbacks stimulated the development of virtual screening methods, in which computational techniques identify promising candidate compounds for a drug target. Virtual screening makes it possible to significantly narrow down the number of drug candidate compounds at a faster pace while lowering costs\u00a0[@Lionta2014; @Yu2017]. These reasons make virtual screening an essential part of the early-stage drug discovery process.\n\nComputational techniques involved in virtual screening make it possible to simulate and test the fitness of the candidate compound towards the desired function without the need for" +"---\nabstract: 'A coprime array receiver processes a collection of received-signal snapshots to estimate the autocorrelation matrix of a larger (virtual) uniform linear array, known as the coarray. By the received-signal model, this matrix has to be (i) Positive-Definite, (ii) Hermitian, (iii) Toeplitz, and (iv) its noise-subspace eigenvalues have to be equal. Existing coarray autocorrelation matrix estimates satisfy a subset of the above conditions. In this work, we propose an optimization framework which offers a novel estimate satisfying all four conditions. Numerical studies illustrate that the proposed estimate outperforms standard counterparts, both in autocorrelation matrix estimation error and Direction-of-Arrival estimation.'\nauthor:\n- '[^1]\\'\nbibliography:\n- 'structured\\_coprime\\_arXiv.bib'\ntitle: Structured Autocorrelation Matrix Estimation for Coprime Arrays\n---\n\n[*[**Index Terms \u2013**]{}*]{} Sensor array processing, Coprime arrays, Coarray, Autocorrelation estimation.\n\nIntroduction {#problem}\n============\n\nIn Direction-of-Arrival (DoA) estimation, coprime arrays offer increased Degrees-of-Freedom (DoF) and enable the identification of more sources than sensors compared to equal-length uniform linear arrays\u00a0[@LEUS; @AMIN3; @AMIN5; @PP2; @PP3; @PP4; @CHUNLIU3; @CHUNLIU5; @ZTAN; @GOODMAN; @PP6; @GUO1; @GUO2]. Coprime arrays have been successfully employed in applications such as beamforming design\u00a0[@PP5; @CZHOU2; @robust_beam] and space-time processing\u00a0[@CHUNLIU2], to name a few. Other non-uniform arrays with increased DoF and closed-form expressions are the" +"---\nabstract: 'In graph embedding, the connectivity information of a graph is used to represent each vertex as a point in a $d$-dimensional space. Unlike the original, irregular structural information, such a representation can be used for a multitude of machine learning tasks. Although the process is extremely useful in practice, it is indeed expensive and, unfortunately, the graphs are becoming larger and harder to embed. Attempts at scaling up the process to larger graphs have been successful but often at a steep price in hardware requirements. We present [[*Gosh*]{}]{}, an approach for embedding graphs of arbitrary sizes on a single GPU with minimum constraints. [[*Gosh*]{}]{} utilizes a novel graph coarsening approach to compress the graph and minimize the work required for embedding, delivering high-quality embeddings at a fraction of the time compared to the state-of-the-art. In addition to this, it incorporates a decomposition schema that enables any arbitrarily large graph to be embedded using a single GPU with minimum constraints on the memory size. 

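Graph coarsening of the kind mentioned in this record is commonly implemented by rounds of vertex matching and edge contraction; a generic sketch of one such round (the record's actual coarsening is its own novel scheme, not reproduced here):

```python
import networkx as nx

def coarsen_once(G):
    """One round of matching-based coarsening: pair each unmatched vertex
    with an unmatched neighbour and contract the pair into a super-vertex."""
    matched, mapping = set(), {}
    for u in G.nodes:
        if u in matched:
            continue
        matched.add(u)
        mapping[u] = u
        partner = next((v for v in G.neighbors(u) if v not in matched), None)
        if partner is not None:
            matched.add(partner)
            mapping[partner] = u
    H = nx.Graph()
    H.add_nodes_from(set(mapping.values()))
    H.add_edges_from((mapping[a], mapping[b]) for a, b in G.edges
                     if mapping[a] != mapping[b])
    return H, mapping

G = nx.karate_club_graph()
H, m = coarsen_once(G)   # H has roughly half as many vertices as G
```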
With these techniques, [[*Gosh*]{}]{} is able to embed a graph with over 65 million vertices and 1.8 billion edges in less than an hour on a single GPU and obtains a $93\%$ AUCROC for link-prediction, which can" +"---\nabstract: |\n The Susceptible-Infected-Recovered (SIR) epidemic model as well as its generalizations are extensively used for the study of the spread of infectious diseases, and for the understanding of the dynamical evolution of epidemics. Among SIR-type models, only the model without vital dynamics has an exact analytic solution, which can be obtained in exact parametric form. The SIR model with vital dynamics, the simplest extension of the basic SIR model, does not admit a closed-form representation of the solution. However, in order to perform comparisons with epidemiological data, accurate representations of the time evolution of the SIR model with vital dynamics would be very useful. In the present paper, we first obtain the basic evolution equation of the SIR model with vital dynamics, which is given by a strongly nonlinear second order differential equation. Then we obtain a series representation of the solution of the model by using the Adomian and Laplace-Adomian Decomposition Methods to solve the dynamical evolution equation of the model. The solutions are expressed in the form of infinite series. The series representations of the time evolution of the SIR model with vital dynamics are compared with the exact numerical solutions" +"---\nabstract: '[PSR\u00a0J0537$-$6910]{}, also known as the Big Glitcher, is the most prolific glitching pulsar known, and its spin-induced pulsations are only detectable in X-rays. We present results from analysis of 2.7\u00a0years of [[*NICER*]{}]{}\u00a0timing observations, from 2017 August to 2020 April. We obtain a rotation phase-connected timing model for the entire timespan, which overlaps with the third observing run of LIGO/Virgo, thus enabling the most sensitive gravitational wave searches of this potentially strong gravitational wave-emitting pulsar. We find that the short-term braking index between glitches decreases towards a value of 7 or lower at longer times since the preceding glitch. By combining [[*NICER*]{}]{}\u00a0and [[*RXTE*]{}]{}\u00a0data, we measure a long-term braking index $n=-1.25\pm0.01$. Our analysis reveals 8 new glitches, the first detected since 2011, near the end of [[*RXTE*]{}]{}, with a total [[*NICER*]{}]{}\u00a0and [[*RXTE*]{}]{}\u00a0glitch activity of $8.88\times 10^{-7}\mbox{ yr$^{-1}$}$. The new glitches follow the seemingly unique time-to-next-glitch\u2014glitch-size correlation established previously using [[*RXTE*]{}]{}\u00a0data, with a slope of $5\mbox{ d $\mu$Hz$^{-1}$}$. For one glitch around which [[*NICER*]{}]{}\u00a0observes two days on either side, we search for but do not see clear evidence of spectral or pulse profile changes that may be associated with the glitch.'\nauthor:" +"---\nabstract: 'In this paper, we propose a highly practical fully online multi-object tracking and segmentation (MOTS) method that uses instance segmentation results as an input. The proposed method is based on the Gaussian mixture probability hypothesis density (GMPHD) filter, a hierarchical data association (HDA), and a mask-based affinity fusion (MAF) model to achieve high-performance online tracking. The HDA consists of two associations: segment-to-track and track-to-track associations. 

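For reference on the SIR record above, a standard form of the SIR model with vital dynamics reads (birth rate $\Lambda$, death rate $\mu$, transmission rate $\beta$, recovery rate $\gamma$; the record's normalization may differ):

$$\frac{dS}{dt}=\Lambda-\mu S-\beta S I,\qquad \frac{dI}{dt}=\beta S I-(\gamma+\mu)I,\qquad \frac{dR}{dt}=\gamma I-\mu R,$$

a system which the record reduces to the strongly nonlinear second-order evolution equation mentioned there.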
One affinity, for position and motion, is computed by using the GMPHD filter, and the other affinity, for appearance, is computed by using the responses from a single object tracker such as a kernelized correlation filter. These two affinities are simply fused by using a score-level fusion method such as min-max normalization, referred to as MAF. In addition, to reduce the number of false positive segments, we adopt mask IoU-based merging (mask merging). The proposed MOTS framework, with its key modules (HDA, MAF, and mask merging), is easily extensible to simultaneously track multiple types of objects with CPU-only execution in parallel processing. In addition, the developed framework only requires simple parameter tuning, unlike many existing MOTS methods that need intensive hyperparameter optimization. In the experiments on the two popular MOTS datasets, the key" +"---\nabstract: 'Current capsule endoscopes and next-generation robotic capsules for diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate complex software and hardware functions. The desired tasks for these systems include visual localization, depth estimation, 3D mapping, disease detection and segmentation, automated navigation, active control, path realization and optional therapeutic modules such as targeted drug delivery and biopsy sampling. Data-driven algorithms promise to enable many advanced functionalities for capsule endoscopes, but real-world data is challenging to obtain. Physically-realistic simulations providing synthetic data have emerged as a solution for the development of data-driven algorithms. In this work, we present a comprehensive simulation platform for capsule endoscopy operations and introduce VR-Caps, a virtual active capsule environment that simulates a range of normal and abnormal tissue conditions (e.g., inflated, dry, wet) and varied organ types, capsule endoscope designs (e.g., mono, stereo, dual and 360\u00b0 camera), and the type, number, strength, and placement of internal and external magnetic sources that enable active locomotion. VR-Caps makes it possible to independently or jointly develop, optimize, and test medical imaging and analysis software for the current and next-generation endoscopic capsule systems. To validate this approach, we train state-of-the-art deep neural networks to" +"---\nabstract: |\n **Background and Objective:** Malignant melanoma (MM) is one of the deadliest types of skin cancer. Analysing dermatoscopic images plays an important role in the early detection of MM and other pigmented skin lesions. Among different computer-based methods, deep learning-based approaches and in particular convolutional neural networks have shown excellent classification and segmentation performances for dermatoscopic skin lesion images. These models can be trained end-to-end without requiring any hand-crafted features. However, the effect of using lesion segmentation information on classification performance has remained an open question.\n\n **Methods:** In this study, we explicitly investigated the impact of using skin lesion segmentation masks on the performance of dermatoscopic image classification. To do this, first, we developed a baseline classifier as the reference model without using any segmentation masks. Then, we used either manually or automatically created segmentation masks in both training and test phases in different scenarios and investigated the classification performances. 

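One of the scenarios enumerated next uses the segmentation mask as an additional input channel; a minimal sketch of that preprocessing step (illustrative only, not the study's exact setup):

```python
import numpy as np

def add_mask_channel(image, mask):
    """Stack a binary lesion mask onto an RGB image as a 4th channel."""
    mask = (mask > 0).astype(image.dtype)[..., None]   # H x W x 1
    return np.concatenate([image, mask], axis=-1)      # H x W x 4

rgb = np.random.rand(224, 224, 3).astype(np.float32)
lesion = np.zeros((224, 224), dtype=np.uint8)
lesion[60:160, 70:150] = 1                             # toy mask
x = add_mask_channel(rgb, lesion)                      # shape (224, 224, 4)
```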
The different scenarios included approaches that exploited the segmentation masks either for cropping of skin lesion images or removing the surrounding background or using the segmentation masks as an additional input channel for model training.\n\n **Results:** Evaluated on the ISIC 2017 challenge dataset which contained two binary classification" +"---\nabstract: 'We propose a novel simulation model that is able to predict the per-level churn and pass rates of Angry Birds Dream Blast, a popular mobile free-to-play game. Our primary contribution is to combine AI gameplay using Deep Reinforcement Learning (DRL) with a simulation of how the player population evolves over the levels. The AI players predict level difficulty, which is used to drive a player population model with simulated skill, persistence, and boredom. This allows us to model, e.g., how less persistent and skilled players are more sensitive to high difficulty, and how such players churn early, which makes the player population and the relation between difficulty and churn evolve level by level. Our work demonstrates that player behavior predictions produced by DRL gameplay can be significantly improved by even a very simple population-level simulation of individual player differences, without requiring costly retraining of agents or collecting new DRL gameplay data for each simulated player.'\nauthor:\n- Shaghayegh Roohi\n- Asko Relas\n- Jari Takatalo\n- Henri Heiskanen\n- Perttu H\u00e4m\u00e4l\u00e4inen\nbibliography:\n- 'references.bib'\ntitle: Predicting Game Difficulty and Churn Without Players\n---\n\n<ccs2012> <concept> <concept\\_id>10003120.10003121.10003122.10003332</concept\\_id> <concept\\_desc>Human-centered computing\u00a0User models</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> <concept> <concept\\_id>10010147.10010341</concept\\_id> <concept\\_desc>Computing methodologies\u00a0Modeling and" +"---\nabstract: 'Question Answering systems are generally modelled as a pipeline consisting of a sequence of steps. In such a pipeline, Entity Linking (EL) is often the first step. Several EL models first perform span detection and then entity disambiguation. In such models errors from the span detection phase cascade to later steps and result in a drop of overall accuracy. Moreover, lack of gold entity spans in training data is a limiting factor for span detector training. Hence the movement towards end-to-end EL models began where no separate span detection step is involved. In this work we present a novel approach to end-to-end EL by applying the popular Pointer Network model, which achieves competitive performance. We demonstrate this in our evaluation over three datasets on the Wikidata Knowledge Graph.'\nauthor:\n- Debayan Banerjee\n- 'Debanjan Chaudhuri[^1]'\n- |\n \\\n Mohnish Dubey[ ^fnsymbol[1]{}^]{}\n- Jens Lehmann\nbibliography:\n- 'pnel.bib'\ntitle: 'PNEL: Pointer Network based End-To-End Entity Linking over Knowledge Graphs'\n---\n\nIntroduction\n============\n\nKnowledge Graph based Question Answering (KGQA) systems use a background Knowledge Graph to answer queries posed by a user. Let us take the following question as an example (Figure \\[pnel\\_exp\\]): *Who founded Tesla?*. The standard sequence of" +"---\nabstract: 'Selecting appropriate inputs for systems described by complex networks is an important but difficult problem that largely remains open in the field of control of networks. 
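For background on the input-selection problem introduced here: the control energy in question is conventionally the minimum energy required to drive the state to a target $x_f$, and it is governed by the controllability Gramian $W$ (standard linear-systems material, stated as context rather than the record's result):

$$E_{\min}(x_f)=x_f^{\top}W(T)^{-1}x_f,\qquad W(T)=\int_{0}^{T}e^{At}BB^{\top}e^{A^{\top}t}\,dt,$$

which is why the ill-conditioning of $W$ mentioned below matters for large networks.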
Recent work has proposed two methods for energy-efficient input selection: a gradient-based heuristic and a greedy approximation algorithm. We propose here an alternative method for input selection based on the analytic solution of the controllability Gramian of [the \u2018balloon graph\u2019, a special model graph that captures the role of both *distance* and *redundant paths* between a driver node and a target node.]{} The method presented is especially applicable for large networks where one is interested in controlling only a small number of outputs, or target nodes, for which current methods may not be practical because they require computing a typically very ill-conditioned matrix, called the controllability Gramian. Our method produces comparable results to the previous methods while being more computationally efficient.'\nauthor:\n- \ntitle: Selecting Energy Efficient Inputs using Graph Structure\n---\n\nNetworked systems, Discrete optimization, Optimal control\n\nIntroduction\n============\n\nMany of the systems we interact with every day are described by complex networks such as social media [@bovet2019influence], the power grid [@arianos2009power; @pagani2013power], the world wide web [@barabasi2000scale], and" +"---\nabstract: 'This paper defines and develops useful concepts related to the several kinds of inductances employed in any comprehensive design-oriented ferrite-based inductor model, which is required to properly design and control high-frequency operated electronic power converters. It is also shown how to extract the necessary parameters from a ferrite material datasheet in order to get inductor models useful for a wide range of core temperatures and magnetic induction levels.'\nauthor:\n- \n- \ntitle: 'A collection of definitions and fundamentals for a design-oriented inductor model'\n---\n\nmagnetic circuit, ferrite core, major magnetic loop, minor magnetic loop, reversible inductance, amplitude inductance\n\nIntroduction\n============\n\nFerrite-core based low-frequency-current biased inductors are commonly found, for example, in the LC output filter of voltage source inverters (VSI) or step-down DC/DC converters. Those inductors have to effectively filter a relatively low-amplitude high-frequency current being superimposed on a relatively large-amplitude low-frequency current. It is of paramount importance to design these inductors in a way that a minimum inductance value is always ensured, which allows the accurate control and the safe operation of the electronic power converter. 

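Before the (truncated) sizing discussion that follows: a commonly used constraint of this kind, included as a hedged illustration rather than the record's own result, follows from the flux linkage $N\Phi=Li$ and the requirement that the peak induction stay below saturation, $B\le B_{sat}$, in a core of effective area $A_e$:

$$N_{min}=\frac{L\,\hat{I}}{B_{sat}\,A_e},$$

where $\hat{I}$ denotes the peak inductor current.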
In order to efficiently design that specific type of inductor, a method to find the required minimum number of turns $N_{min}$" +"---\nbibliography:\n- 'main.bib'\n---\n\nNational Research University Higher School of Economics\n\n*as a manuscript*\n\nAlexander Igorevich Tyurin\n\n**Development of a method for solving structural optimization problems**\n\nPhD Dissertation Summary\n\nfor the purpose of obtaining academic degree\n\nDoctor of Philosophy in Computer Science\n\nMoscow - 2020\n\nThe PhD dissertation was prepared at International Laboratory of Stochastic Algorithms and High-Dimensional Inference, National Research University Higher School of Economics\\\n[**Academic Supervisor:**]{}\\\nAlexander Vladimirovich Gasnikov, Doctor of Sciences in Mathematical Modelling, Numerical Methods and Software Complexes, Senior Research Fellow at International Laboratory of Stochastic Algorithms and High-Dimensional Inference, National Research University Higher School of Economics\n\nIntroduction\n============\n\nOptimization methods have a significant impact on all spheres of human society. It is difficult to list all recent activities where optimization methods are used to solve practical problems. In many problems of economics, engineering, programming optimization methods are helpful. Optimization methods came up with computer engineering in the twentieth century. That is when the active development of the modern theory of optimization began. The pioneer is L. Kantorovich [@kantarovich1939; @polyakhistory], who considered linear programming problems in engineering and economics. In the 50s-60s, cutting edge works were done by G. Rubinstein, E. Ventsel, N. Vorobyov, D." +"---\nabstract: 'We introduce a new machine-learning-based approach, which we call the Independent Classifier networks (InClass nets) technique, for the *nonparameteric* estimation of conditional independence mixture models (CIMMs). We approach the estimation of a CIMM as a multi-class classification problem, since dividing the dataset into different categories naturally leads to the estimation of the mixture model. InClass nets consist of multiple independent classifier neural networks (NNs), each of which handles one of the variates of the CIMM. Fitting the CIMM to the data is performed by simultaneously training the individual NNs using suitable cost functions. The ability of NNs to approximate arbitrary functions makes our technique nonparametric. Further leveraging the power of NNs, we allow the conditionally independent variates of the model to be individually high-dimensional, which is the main advantage of our technique over existing non-machine-learning-based approaches. We derive some new results on the nonparametric identifiability of bivariate CIMMs, in the form of a necessary and a (different) sufficient condition for a bivariate CIMM to be identifiable. We provide a public implementation of InClass nets as a Python package called [[`RainDancesVI`]{}](\\raindancesurl) and validate our InClass nets technique with several worked out examples. Our method also has applications in unsupervised" +"---\nabstract: |\n Infinite server queues have ultimate processing power to accommodate explosive demand surges. We provide a new stability criterion based on the Borel-Cantelli lemma to judge whether the infinite server safely accommodates heavy-tailed demands. We illustrate the battles between heavy-tailed demand and infinite servers in detail. 
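The stability criterion of the preceding record rests on the Borel-Cantelli lemma, whose standard form is: for events $A_1,A_2,\dots$,

$$\sum_{n=1}^{\infty}\mathbb{P}(A_n)<\infty\;\Longrightarrow\;\mathbb{P}\Big(\limsup_{n\to\infty}A_n\Big)=0,$$

i.e. almost surely only finitely many of the $A_n$ occur.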
In particular, we show some cases where the explosive demand overwhelms the infinite server queue. The medical demand caused by pandemics such as COVID-19 creates huge stress on the healthcare system. This framework indicates that healthcare systems need to account for the tail behavior of the cluster size and hospital stay length distributions to check the stability of their systems during pandemics.\n\n \[Popular Summary\]\n\n ==========\n\n Healthcare systems are under pressure due to the COVID-19 pandemic. Social gatherings may create clusters of infections, and the resulting patient stream causes a shortage of beds, medical supplies, and medical staff. In order to meet the explosive surge of medical demand, healthcare systems are reinforced, sometimes even by building temporary hospitals overnight.\n\n The infinite server queue is the ultimate model of such an idealized hospital that serves any number of patients without delay. We find new criteria for the stability of the infinite server queue:" +"---\nabstract: 'Intelligent reflecting surface (IRS) has emerged as an enabling technology to achieve a smart and reconfigurable wireless communication environment cost-effectively. Prior works on IRS mainly consider its passive beamforming design and performance optimization without the inter-IRS signal reflection, which thus do not unveil the full potential of multi-IRS assisted wireless networks. In this paper, we study a double-IRS assisted multi-user communication system with the *cooperative* passive beamforming design that captures the multiplicative beamforming gain from the inter-IRS channel. Under the general channel setup with the co-existence of both double- and single-reflection links, we jointly optimize the (active) receive beamforming at the base station (BS) and the cooperative (passive) reflect beamforming at the two distributed IRSs (deployed near the BS and users, respectively) to maximize the minimum signal-to-interference-plus-noise ratio (SINR) of all users. Moreover, for the single-user and multi-user setups, we analytically show the superior performance of the double-IRS cooperative system over the conventional single-IRS system in terms of the maximum signal-to-noise ratio (SNR) and multi-user effective channel rank, respectively. Simulation results validate our analytical results and show the practical advantages of the proposed double-IRS system with cooperative passive beamforming designs.'\nbibliography:\n- 'IRS\\_MIMO.bib'\n---\n\nIntelligent reflecting surface (IRS), distributed" +"---\nabstract: 'We explore the cosmological implications at the effective level of matter creation effects in a dissipative fluid for a FLRW geometry; we also perform a statistical analysis for this kind of model. By considering an inhomogeneous Ansatz for the particle production rate, we find that for created matter of dark-matter type we can have a quintessence scenario or a future singularity known as the little rip, depending on the value of a constant parameter, $\eta$, which characterizes the matter production effects. The dimensionless age of this kind of Universe is computed, showing that this number is greater than the standard cosmology value; this is typical of universes with dark energy present. The inclusion of baryonic matter is studied. 

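The two expressions for the bulk viscous pressure $\Pi$ compared in what follows are, in their standard forms (bulk viscosity $\zeta$, Hubble rate $H$, relaxation time $\tau$):

$$\Pi_{\mathrm{Eckart}}=-3\zeta H,\qquad \tau\,\dot{\Pi}+\Pi=-3\zeta H\quad\text{(truncated Israel-Stewart)}.$$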
We then implement the construction of the particle production rate for a dissipative fluid by considering two approaches for the expression of the bulk viscous pressure: in the Eckart model we find a big rip singularity leading to catastrophic matter production, while in the truncated version of the Israel-Stewart model the production rate remains bounded, leading to a quintessence scenario. For a non-adiabatic dissipative fluid, we obtain a positive temperature and the cosmic expansion obeys the second" +"---\nabstract: 'This paper considers a nested stochastic distributed optimization problem. In it, approximate solutions to realizations of the inner-problem are leveraged to obtain a Distributed Stochastic Cubic Regularized Newton (DiSCRN) update to the decision variable of the outer problem. We provide an example involving electric vehicle users with various preferences which demonstrates that this model is appropriate and sufficiently complex for a variety of data-driven multi-agent settings, in contrast to non-nested models. The two main contributions of the paper are: (i) development of a local stopping criterion for solving the inner optimization problem which guarantees sufficient accuracy for the outer-problem update, and (ii) development of the novel DiSCRN algorithm for solving the outer-problem and a theoretical justification of its efficacy. Simulations demonstrate that this approach is more stable and converges faster than standard gradient and Newton outer-problem updates in a highly nonconvex scenario, and we also demonstrate that the method extends to an EV charging scenario in which resistive battery losses and a time-of-use pricing model are considered over a time horizon.'\nauthor:\n- 'Tor Anderson and Sonia Mart[\u00ed]{}nez'\nbibliography:\n- 'alias.bib'\n- 'SMD-add.bib'\n- 'JC.bib'\n- 'SM.bib'\ntitle: Distributed Stochastic Nested Optimization via Cubic Regularization \n---\n\nIntroduction\n============\n\n*Motivation.* As" +"---\nabstract: 'In this work we show the advantages of using the Coulomb-hole plus screened-exchange (COHSEX) approach in the calculation of potential energy surfaces. In particular, we demonstrate that, unlike perturbative $GW$ and partial self-consistent $GW$ approaches, such as eigenvalue-self-consistent $GW$ and quasi-particle self-consistent $GW$, the COHSEX approach yields smooth potential energy surfaces without irregularities and discontinuities. Moreover, we show that the ground-state potential energy surfaces (PES) obtained from the Bethe-Salpeter equation, within the adiabatic connection fluctuation dissipation theorem, built with quasi-particle energies obtained from perturbative COHSEX on top of Hartree-Fock (BSE@COHSEX@HF) yield very accurate results for diatomic molecules close to their equilibrium distance. When self-consistent COHSEX quasi-particle energies and orbitals are used to build the BSE equation the results become independent of the starting point. We show that self-consistency worsens the total energies but improves the equilibrium distances with respect to BSE@COHSEX@HF. 

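For reference on the COHSEX record above, the static COHSEX self-energy in its standard Hedin form (occupied orbitals $\phi_i$, bare Coulomb interaction $v$, statically screened interaction $W$) is

$$\Sigma^{\mathrm{SEX}}(\mathbf{r},\mathbf{r}')=-\sum_{i}^{\mathrm{occ}}\phi_i(\mathbf{r})\phi_i^{*}(\mathbf{r}')\,W(\mathbf{r},\mathbf{r}'),\qquad \Sigma^{\mathrm{COH}}(\mathbf{r},\mathbf{r}')=\frac{1}{2}\,\delta(\mathbf{r}-\mathbf{r}')\big[W(\mathbf{r},\mathbf{r}')-v(\mathbf{r},\mathbf{r}')\big].$$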
This is mainly due to changes in the screening inside the BSE.'\nauthor:\n- 'J.\u00a0Arjan Berger'\n- 'Pierre-Fran\u00e7ois Loos'\n- Pina Romaniello\ntitle: 'Potential energy surfaces without unphysical discontinuities: the Coulomb-hole plus screened exchange approach'\n---\n\n![image](TOC.pdf)\n\nIntroduction\n============\n\nIn the last decade the $GW$ method [@Hedin_1965; @Aryasetiawan_1998; @Reining_2017; @Golze_2019] has become a standard tool in" +"---\nabstract: 'We investigate the statistical properties of eigenvalues of pseudo-Hermitian random matrices whose eigenvalues are real or complex conjugate. It is shown that when the spectrum splits into separated sets of real and complex conjugate eigenvalues, the real ones show characteristics of an intermediate incomplete spectrum, that is, of a so-called thinned ensemble. On the other hand, the complex ones show repulsion compatible with cubic-order repulsion of non normal matrices for the real matrices, but higher order repulsion for the complex and quaternion matrices.'\nauthor:\n- 'G. Marinello'\n- 'M. P. Pato'\nbibliography:\n- 'refs.bib'\ntitle: 'Statistical properties of eigenvalues of an ensemble of pseudo-Hermitian Gaussian matrices'\n---\n\nIntroduction\n============\n\nIt can be shown that a complex non-Hermitian Hamiltonian invariant under the combined parity ($\\mathcal{P}$) and time reversal ($\\mathcal{T}$) transformations have eigenvalues which are real or complex conjugate. A Hamiltonian with this so-called [$\\mathcal{PT}$]{}-symmetry is, for instance, $$H=p^2 -(ix)^{\\gamma} \n \\label{1}$$ whose properties have been analyzed in a seminal paper [@Bender1998]. It was found that, as a function of the parameter $\\gamma,$ for $\\gamma > 2,$ eigenvalues are real and, progressively, as $\\gamma$ decreases they move into the complex plane in conjugate pairs. This can be seen as a phase" +"---\nabstract: 'Pesticide application has been heavily used in the cultivation of major crops, contributing to the increase of crop production over the past decades. However, their appropriate use and calibration of machines rely upon evaluation methodologies that can precisely estimate how well the pesticides\u2019 spraying covered the crops. A few strategies have been proposed in former works, yet their elevated costs and low portability do not permit their wide adoption. This work introduces and experimentally assesses a novel tool that functions over a smartphone-based mobile application, named DropLeaf - Spraying Meter. Tests performed using DropLeaf demonstrated that, notwithstanding its versatility, it can estimate the pesticide spraying with high precision. Our methodology is based on image analysis, and the assessment of spraying deposition measures is performed successfully over real and synthetic water-sensitive papers. The proposed tool can be extensively used by farmers and agronomists furnished with regular smartphones, improving the utilization of pesticides with well-being, ecological, and monetary advantages. DropLeaf can be easily used for spray drift assessment of different methods, including emerging UAV (Unmanned Aerial Vehicle) sprayers.'\naddress:\n- 'Dalhousie University \u2013 Halifax, Nova Scotia, Canada'\n- 'University of Sao Paulo \u2013 Sao Carlos, SP, Brazil'\n- 'CNRS, Univ." +"---\nabstract: 'We previously proposed a method that allows for nonparallel voice conversion (VC) by using a variant of generative adversarial networks (GANs) called StarGAN. 
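A brief note on the pseudo-Hermitian ensemble described earlier in this block: a matrix $H$ is pseudo-Hermitian when

$$\eta\,H^{\dagger}\,\eta^{-1}=H$$

for some invertible Hermitian $\eta$, which forces its eigenvalues to be real or to come in complex-conjugate pairs; this is exactly the spectral structure whose statistics that record studies.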
The main features of our method, called StarGAN-VC, are as follows: First, it requires no parallel utterances, transcriptions, or time alignment procedures for speech generator training. Second, it can simultaneously learn mappings across multiple domains using a single generator network and thus fully exploit available training data collected from multiple domains to capture latent features that are common to all the domains. Third, it can generate converted speech signals quickly enough to allow real-time implementations and requires only several minutes of training examples to generate reasonably realistic-sounding speech. In this paper, we describe three formulations of StarGAN, including a newly introduced novel StarGAN variant called \u201cAugmented classifier StarGAN (A-StarGAN)\u201d, and compare them in a nonparallel VC task. We also compare them with several baseline methods.'\nauthor:\n- 'Hirokazu\u00a0Kameoka,\u00a0 Takuhiro Kaneko, Kou Tanaka, and Nobukatsu Hojo[^1][^2]'\nbibliography:\n- 'Kameoka2018arXiv11.bib'\ntitle: Nonparallel Voice Conversion with Augmented Classifier Star Generative Adversarial Networks \n---\n\nVoice conversion (VC), nonparallel VC, multi-domain VC, generative adversarial networks (GANs), CycleGAN, StarGAN, A-StarGAN.\n\nIntroduction {#sec:intro}\n============\n\nVoice conversion (VC) is a task of" +"---\nabstract: 'We investigate the existence of double copy structure, or the lack thereof, in higher derivative operators for Nambu-Goldstone bosons. At the leading ${\\cal O}(p^2)$, tree amplitudes of Nambu-Goldstone bosons in the adjoint representation can be (trivially) expressed as the double copy of itself and the cubic bi-adjoint scalar theory, through the Kawai-Lewellen-Tye bilinear kernel. At the next-to-leading ${\\cal O}(p^4)$ there exist four operators in general, among which we identify one operator whose amplitudes exhibit the flavor-kinematics duality and can be written as the double copy of ${\\cal O}(p^2)$ Nambu-Goldstone amplitudes and the Yang-Mills+$\\phi^3$ theory, involving both gluons and gauged cubic bi-adjoint scalars. The specific operator turns out to coincide with the scalar ${\\cal O}(p^4)$ operator in the so-called extended Dirac-Born-Infeld theory, for which the aforementioned double copy relation holds more generally.'\nauthor:\n- 'Ian Low$^{\\, a,b}$, Laurentiu Rodina$\\, ^c$, Zhewei Yin$^{\\, b}$'\nbibliography:\n- 'references\\_amp.bib'\ntitle: 'Double Copy in Higher Derivative Operators of Nambu-Goldstone Bosons'\n---\n\nIntroduction\n============\n\nThe nonlinear sigma model (NLSM) [@GellMann:1960np; @Coleman:1969sm; @Callan:1969sn] is an effective field theory (EFT) of Nambu-Goldstone bosons (NGB\u2019s) arising from spontaneously broken symmetries. Recent developments in the modern S-matrix program have led to renewed interest in the NLSM, which is" +"---\nabstract: 'We consider the notion of information distance between two objects $x$ and $y$ introduced by Bennett, G\u00e1cs, Li, Vit\u00e1nyi, and Zurek\u00a0[@bglvz] as the minimal length of a program that computes $x$ from $y$ as well as computing $y$ from $x$. In this paper it was proven that the distance is equal to $\\max (\\operatorname{\\mathrm{K}\\mskip 1.2mu}(x{\\mskip 1mu|\\mskip 1mu}y),\\operatorname{\\mathrm{K}\\mskip 1.2mu}(y{\\mskip 1mu|\\mskip 1mu}x))$ up to additive logarithmic terms, and it was conjectured that this could not be improved to $O(1)$ precision. We revisit subtle issues in the definition and prove this conjecture. 
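In display form, the information distance and the characterization recalled here read (with $U$ a fixed universal machine, and the equality holding up to the additive logarithmic terms mentioned above):

$$E(x,y)=\min\{\,|p| : U(p,x)=y \ \text{and}\ U(p,y)=x\,\}=\max\big(\operatorname{K}(x\,|\,y),\operatorname{K}(y\,|\,x)\big).$$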
We show that if the distance is at least logarithmic in the length, then this equality does hold with $O(1)$ precision for strings of equal length. Thus for such strings, both the triangle inequality and the characterization hold with optimal precision. Finally, we extend the result to sets $S$ of bounded size. We show that for each constant\u00a0$s$, the shortest program that prints an $s$-element set $S \subseteq \{0,1\}^n$ given any of its elements, has length at most $\max_{w \in S} \operatorname{\mathrm{K}\mskip 1.2mu}(S {\mskip 1mu|\mskip 1mu}w) + O(1)$, provided this maximum is at least logarithmic in\u00a0$n$.'\nauthor:\n- 'Bruno Bauwens[^1]'\nbibliography:\n- 'bib.bib'\ntitle: Precise Expression" +"---\nabstract: 'In this paper, we first introduce the notions of checkerboard colourable minors for ribbon graphs motivated by the Eulerian ribbon graph minors, and two kinds of bipartite minors for ribbon graphs, one of which is the dual of the checkerboard colourable minors and the other is motivated by the bipartite minors of abstract graphs. Then we give an excluded minor characterization of the classes of checkerboard colourable ribbon graphs, bipartite ribbon graphs, plane checkerboard colourable ribbon graphs and plane bipartite ribbon graphs.'\naddress:\n- 'School of Mathematical Sciences, Xiamen University, 361005, Xiamen, China'\n- 'School of Mathematical Sciences, Xiamen University, 361005, Xiamen, China'\n- 'School of Mathematical Sciences, Xiamen University, 361005, Xiamen, China'\nauthor:\n- Xia Guo\n- 'Xian\u2019an Jin'\n- Qi Yan\ntitle: Excluded checkerboard colourable ribbon graph minors\n---\n\n[^1]\n\nIntroduction\n============\n\nThe geometric dual is a fundamental concept in graph theory. It can be stated in the language of ribbon graphs as follows. For a ribbon graph $G$, its geometrical dual $G^{\ast}$ is obtained by sewing discs, which will be the vertices of $G^{\ast}$, into the boundary components of $G$, and removing the interiors, which will be the faces of $G^{\ast}$, of all vertex discs" +"---\nabstract: 'The symmetries of the sdg-IBM, the interacting boson model with $s$, $d$ and $g$ bosons, are studied as regards the occurrence of shapes with octahedral symmetry. It is shown that no Hamiltonian with a dynamical symmetry displays in its classical limit an isolated minimum with octahedral shape. However, a degenerate minimum that includes a shape with octahedral symmetry can be obtained from a Hamiltonian that is transitional between two limits, ${\rm U}_g(9)\otimes{\rm U}_d(5)$ and ${\rm SO}_{sg}(10)\otimes{\rm U}_d(5)$, and the conditions for its existence are derived. An isolated minimum with octahedral shape, either an octahedron or a cube, may arise through a modification of two-body interactions between the $g$ bosons. 

Comments on the observational consequences of this construction are made.'\naddress:\n- |\n Department of Physics, PRIMALAB Laboratory, Batna 1 University\\\n Route de Biskra, 05000 Batna, Algeria\n- |\n Grand Acc\u00e9l\u00e9rateur National d\u2019Ions Lourds, CEA/DRF-CNRS/IN2P3\\\n Bvd Henri Becquerel, F-14076 Caen, France\nauthor:\n- 'A.\u00a0Bouldjedri'\n- 'S.\u00a0Zerguine'\n- 'P.\u00a0Van\u00a0Isacker'\ntitle: |\n Higher-rank discrete symmetries in the IBM.\\\n II Octahedral shapes: Dynamical symmetries\n---\n\n,\n\nand\n\n,\n\ndiscrete octahedral symmetry ,interacting boson model ,$g$ bosons\n\n21.60.Ev ,21.60.Fw\n\nIntroduction {#s_intro}\n============\n\nThis paper is the second in the" +"---\nabstract: 'The state estimation of continuous-time nonlinear systems in which a subset of sensor outputs can be maliciously controlled through injecting a potentially unbounded additive signal is considered in this paper. Analogous to our earlier work for continuous-time linear systems in [@chong2015observability], we term the convergence of the estimates to the true states in the presence of sensor attacks as \u2018observability under $M$ attacks\u2019, where $M$ refers to the number of sensors which the attacker has access to. Unlike the linear case, we only provide a sufficient condition such that a nonlinear system is observable under $M$ attacks. The condition requires the existence of asymptotic observers which are robust with respect to the attack signals in an input-to-state stable sense. We show that an algorithm to choose a compatible state estimate from the state estimates generated by the bank of observers achieves asymptotic state reconstruction. We also provide a constructive method for a class of nonlinear systems to design state observers which have the desirable robustness property. The relevance of this study is illustrated on monitoring the safe operation of a power distribution network.'\nauthor:\n- 'Michelle S. Chong, Henrik Sandberg, Jo\u00e3o P.\u00a0Hespanha [^1] [^2] [^3] [^4]'\nbibliography:" +"---\nabstract: 'Blazar jets are extreme environments, in which relativistic proton interactions with an ultraviolet photon field could give rise to photopion production.\u00a0High-confidence associations of individual high-energy neutrinos with blazar flares could be achieved via spatially and temporally coincident detections. In 2017, the track-like, extremely high-energy neutrino event IC170922A was found to coincide with increased $\\gamma$-ray emission from the blazar TXS0506+056, leading to the identification of the most promising neutrino point source candidate so far. We calculate the expected number of neutrino events that can be detected with IceCube, based on a broadband parametrization of bright short-term blazar flares that were observed in the first 6.5 years of *Fermi*/LAT observations. We find that the integrated keV-to-GeV fluence of most individual blazar flares is far too small to yield a substantial Poisson probability for the detection of one or more neutrinos with IceCube. We show that the sample of potentially detectable high-energy neutrinos from individual blazar flares is rather small. We further show that the blazars 3C279 and PKS1510$-$089 dominate the all-sky neutrino prediction from bright and short-term blazar flares. 
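The Poisson probability invoked in the blazar-flare record above, of detecting at least one neutrino when $N_\nu$ events are expected, is

$$P(\geq 1)=1-e^{-N_\nu},$$

with $N_\nu$ obtained schematically by folding the predicted neutrino flux with the detector effective area over the flare duration.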
In the end, we discuss strategies to search for more significant associations in future data unblindings of IceCube and KM3NeT.'" +"---\nabstract: 'We describe our top-team solution to Task 1 for Hindi in the HASOC contest organised by FIRE 2019. The task is to identify hate speech and offensive language in Hindi. More specifically, it is a binary classification problem where a system is required to classify tweets into two classes: (a) *Hate and Offensive (HOF)* and (b) *Not Hate or Offensive (NOT)*. In contrast to the popular idea of pretraining word vectors (a.k.a. word embedding) with a large corpus from a general domain such as Wikipedia, we used a relatively small collection of relevant tweets (i.e. random and sarcasm tweets in Hindi and Hinglish) for pretraining. We trained a Convolutional Neural Network (CNN) on top of the pretrained word vectors. This approach allowed us to be ranked first for this task out of all teams. Our approach could easily be adapted to other applications where the goal is to predict class of a text when the provided context is limited.'\nauthor:\n- Md Abul Bashar\n- Richi Nayak\nbibliography:\n- 'References.bib'\ntitle: 'QutNocturnal@HASOC\u201919: CNN for Hate Speech and Offensive Content Identification in Hindi Language'\n---\n\nIntroduction\n============\n\nThe \u201cHate Speech and Offensive Content Identification in Indo-European Languages\u201d track[^1] (HASOC)" +"---\nabstract: 'An emerging number of modern applications involve forecasting time series data that exhibit both short-time dynamics and long-time seasonality. Specifically, time series with multiple seasonality is a difficult task with comparatively fewer discussions. In this paper, we propose a two-stage method for time series with multiple seasonality, which does not require pre-determined seasonality periods. In the first stage, we generalize the classical seasonal autoregressive moving average (ARMA) model in multiple seasonality regime. In the second stage, we utilize an appropriate criterion for lag order selection. Simulation and empirical studies show the excellent predictive performance of our method, especially compared to a recently popular \u2018Facebook Prophet\u2019 model for time series.'\nauthor:\n- \n- \nbibliography:\n- 'bigdata.bib'\ntitle: Forecasting with Multiple Seasonality\n---\n\nTime series, Model selection, Multiple seasonality, Nowcasting.\n\nIntroduction\n============\n\nIn time series, seasonality is defined as the presence of variations that occur at specific regular intervals. Forecasting on time series with multiple seasonality that has different lengths of seasonality cycles is usually considered a difficult task. Therefore, detection and accommodation of the seasonality effect play an important role in time series forecasting.\n\nAmong all the forecasting techniques, the seasonal ARIMA model\u00a0[@SARIMA] and exponential smoothing technique \u00a0[@winter;" +"---\nabstract: 'We consider a one-dimensional trapped spin-1 Bose gas and numerically explore families of its solitonic solutions, namely antidark-dark-antidark (ADDAD), as well as dark-antidark-dark (DADD) solitary waves. Their existence and stability properties are systematically investigated within the experimentally accessible easy-plane ferromagnetic phase by means of a continuation over the atom number as well as the quadratic Zeeman energy. It is found that ADDADs are substantially more dynamically robust than DADDs. The latter are typically unstable within the examined parameter range. 
The dynamical evolution of both of these states is explored and the implication of their potential unstable evolution is studied. Some of the relevant observed possibilities involve, e.g., symmetry-breaking instability manifestations for the ADDAD, as well as splitting of the DADD into a right- and a left-moving dark-antidark pair with the anti-darks residing in a different component as compared to prior to the splitting. In the latter case, the structures are seen to disperse upon long-time propagation.'\nauthor:\n- 'C.-M. Schmied'\n- 'P. G. Kevrekidis'\ntitle: 'Dark-Antidark Spinor Solitons in Spin-1 Bose Gases'\n---\n\nIntroduction {#sec:Introduction}\n============\n\nSince their experimental realization two-and-a-half decades ago, Bose-Einstein condensates (BECs) have been of substantial interest due to their ability to provide a" +"---\nabstract: |\n We introduce a data distribution scheme for [${\\mathcal{H}}$-matrices]{} and a distributed-memory algorithm for [${\\mathcal{H}}$-matrix]{}-vector multiplication. Our data distribution scheme avoids an expensive $\\Omega(P^2)$ scheduling procedure used in previous work, where $P$ is the number of processes, while data balancing is well-preserved. Based on the data distribution, our distributed-memory algorithm evenly distributes all computations among $P$ processes and adopts a novel tree-communication algorithm to reduce the latency cost. The overall complexity of our algorithm is $O\\Big(\\frac{N \\log\n N}{P} + \\alpha \\log P + \\beta \\log^2 P \\Big)$ for [${\\mathcal{H}}$-matrices]{} under weak admissibility condition, where $N$ is the matrix size, $\\alpha$ denotes the latency, and $\\beta$ denotes the inverse bandwidth. Numerically, our algorithm is applied to address both two- and three-dimensional problems of various sizes among various numbers of processes. On thousands of processes, good parallel efficiency is still observed.\naddress: |\n \u00a0Department of Mathematics, Duke University, Durham, NC 27708, USA.\\\n \u00a0Hodge Star, Toronto, Canada.\\\n \u00a0Department of Mathematics and ICME, Stanford University, Stanford, CA 94305, USA.\\\nauthor:\n- 'Yingzhou Li, Jack Poulson\u00a0and Lexing Ying'\nbibliography:\n- 'dmhm.bib'\ntitle: |\n Distributed-memory [${\\mathcal{H}}$-matrix]{} Algebra I:\\\n Data distribution and matrix-vector multiplication\n---\n\nIntroduction\n============\n\nFor linear elliptic partial differential equations, the" +"---\nauthor:\n- |\n Dongming Han, Wei Chen, Rusheng Pan, Yijing Liu, Jiehui Zhou, Ying Xu, Tianye Zhang,\\\n Changjie Fan, Jianrong Tao, and Xiaolong (Luke) Zhang\nbibliography:\n- 'ref-0430-zjh.bib'\ntitle: '[GraphFederator]{}: Federated Visual Analysis for Multi-party Graphs'\n---\n\nIntroduction {#intro}\n============\n\nVisual analysis of multi-party graphs plays an important role in helping us understand real-world complex data \u00a0[@von2011visual; @wang2018visual; @cao2015g], such as ego-network analysis in social media\u00a0[@wu2016survey; @zhao2016egocentric], disease diagnosis in healthcare\u00a0[@liu2016graph], and anomaly detection in public security\u00a0[@cao2015targetvue; @zhang2017survey]. Various features or models extracted from multi-party graphs can be integrated to support a comprehensive understanding of the entire graph data. Using the integrated information, we can conduct more comprehensive investigations. 
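To make the weak-admissibility structure in the H-matrix abstract above concrete, here is a single-process Python toy of an H-matrix-style matrix-vector product in which every off-diagonal block is compressed to low rank and applied cheaply. It only illustrates the mechanics behind the $O(N\log N)$ local work term; the paper's data distribution and tree-communication algorithms are not modeled, and the leaf size and rank are arbitrary choices.

```python
import numpy as np

def _low_rank(block, rank):
    """Truncated SVD factors U, V with block ~ U @ V.T."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    r = min(rank, len(s))
    return u[:, :r] * s[:r], vt[:r].T

class HODLRMatrix:
    """Toy HODLR matrix under weak admissibility: both off-diagonal
    blocks at every level are stored as U @ V.T, diagonal blocks are
    split recursively."""

    def __init__(self, dense, leaf=64, rank=8):
        n = dense.shape[0]
        if n <= leaf:
            self.leaf_block, self.children = dense.copy(), None
            return
        h = n // 2
        self.U12, self.V12 = _low_rank(dense[:h, h:], rank)
        self.U21, self.V21 = _low_rank(dense[h:, :h], rank)
        self.children = (HODLRMatrix(dense[:h, :h], leaf, rank),
                         HODLRMatrix(dense[h:, h:], leaf, rank))
        self.split = h

    def matvec(self, x):
        if self.children is None:
            return self.leaf_block @ x
        h, (top, bot) = self.split, self.children
        y_top = top.matvec(x[:h]) + self.U12 @ (self.V12.T @ x[h:])
        y_bot = bot.matvec(x[h:]) + self.U21 @ (self.V21.T @ x[:h])
        return np.concatenate([y_top, y_bot])

# A random dense matrix is not compressible, so this only checks the
# mechanics; the truncation error dominates the reported discrepancy.
A, x = np.random.rand(512, 512), np.random.rand(512)
H = HODLRMatrix(A)
print(np.linalg.norm(H.matvec(x) - A @ x) / np.linalg.norm(A @ x))
```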
For instance, by combining knowledge graphs of patients and diseases from multiple hospitals, doctors can gain a deeper understanding of diseases and develop the best treatment plans.\n\nOne main bottleneck of exploiting multi-party graphs is data accessibility. Early studies on graph visual analysis assume that the graph data is freely accessible. Currently, however, more and more graph data are distributed (e.g., on servers in different organizations). To analyze such data, we need to combine multi-party graphs and examine them as an entirety. Considering privacy and security, raw" +"---\nabstract: 'We consider the out-of-equilibrium behavior of a general class of mesoscopic devices composed of several superconducting or/and normal metal leads separated by quantum dots. Starting from a microscopic Hamiltonian description, we provide a non-perturbative approach to quantum electronic transport in the tunneling amplitudes between dots and leads: using the equivalent of a path integral formulation, the lead degrees of freedom are integrated out in order to compute both the current and the current correlations (noise) in this class of systems, in terms of the dressed Green\u2019s function matrix of the quantum dots. In order to illustrate the efficiency of this formalism, we apply our results to the \u201call superconducting Cooper pair beam splitter\u201d, a device composed of three superconducting leads connected via two quantum dots, where crossed Andreev reflection operates Cooper pair splitting. Commensurate voltage differences between the three leads allow us to obtain expressions for the current and noise as a function of the Keldysh Nambu Floquet dressed Green\u2019s function of the dot system. This voltage configuration allows the occurrence of non-local processes involving multiple Cooper pairs which ultimately lead to the presence of non-zero DC currents in an out-of-equilibrium situation. We investigate in detail the results for" +"---\nabstract: 'A folded disk is bistable, as it can be popped through to an inverted state with elastic energy localized in a small, highly-deformed region on the fold. Cutting out this singularity relaxes the surrounding material and leads to a loss of bistability when the hole dimensions reach a critical size. These dimensions are strongly anisotropic and feature a surprising re-entrant behavior, such that removal of additional material can re-stabilize the inverted state. A model of the surface as a wide annular developable strip is found to capture the qualitative observations in experiments and simulations. These phenomena are consequential to the mechanics and design of crumpled elastic sheets, developable surfaces, origami and kirigami, and other deployable and compliant structures.'\nauthor:\n- 'T. Yu'\n- 'I. Andrade-Silva'\n- 'M. A. Dias'\n- 'J. A. Hanna'\ntitle: Cutting holes in bistable folds\n---\n\nThe role of elastic singularities in the deformation of thin sheets and shells is still poorly understood, despite a quarter of a century of intense investigation into their geometry and energetics [@AmirbayatHearle86-1; @AmirbayatHearle86-2; @BenAmarPomeau97; @Chaieb98; @CerdaMahadevan98; @MoraBoudaoud02; @lobkovsky1995scaling; @DiDonna02; @LiangWitten05; @FarmerCalladine05; @Nasto13; @ChopinKudrolli16; @Yang18; @Moshe19; @Elder19]. 
Over the years, several perspectives have emerged, viewing these localized high-energy regions" +"---\nabstract: |\n We can use a hybrid memory system consisting of DRAM and Intel Optane DC Persistent Memory (we call it\\\n \u201cDCPM\u201d in this paper), as DCPM has been commercially available since April 2019. Although the latency of DCPM is several times higher than that of DRAM, its capacity is several times larger than that of DRAM and its cost is also several times lower. In addition, DCPM is non-volatile. A server with this hybrid memory system could improve the performance of in-memory database systems and virtual machine (VM) systems because these systems often consume a large amount of memory. Moreover, a high-speed shared storage system can be implemented by accessing DCPM via remote direct memory access (RDMA). I assume that some of the DCPM is often assigned as a shared area for other remote servers because applications executed on a server with a hybrid memory system often cannot use the entire capacity of DCPM. This paper evaluates the interference between local memory access and RDMA from a remote server. As a result, I show that the interference on this hybrid memory system is significantly different from that on a" +"---\nauthor:\n- Pierre Gratier\n- J\u00e9r\u00f4me Pety\n- Emeric Bron\n- Antoine Roueff\n- 'Jan H. Orkisz'\n- Maryvonne Gerin\n- Victor de Souza Magalhaes\n- Mathilde Gaudel\n- Maxime Vono\n- S\u00e9bastien Bardeau\n- Jocelyn Chanussot\n- Pierre Chainais\n- 'Javier R. Goicoechea'\n- 'Viviana V. Guzm\u00e1n'\n- Annie Hughes\n- Jouni Kainulainen\n- David Languignon\n- Jacques Le Bourlot\n- Franck Le Petit\n- Fran\u00e7ois Levrier\n- Harvey Liszt\n- Nicolas Peretto\n- Evelyne Roueff\n- Albrecht Sievers\nbibliography:\n- 'ms.bib'\ntitle: |\n Quantitative inference of the [[[$\\mathrm{H_2}$]{}]{}]{}\u00a0column densities\\\n from 3mm molecular emission: A case study towards Orion B\n---\n\n[Molecular hydrogen being unobservable in cold molecular clouds, the column density measurements of molecular gas currently rely either on dust emission observation in the far-IR, which requires space telescopes, or on star counting, which is limited in angular resolution by the stellar density. (Sub-)millimeter observations of numerous trace molecules are effective from ground based telescopes, but the relationship between the emission of one molecular line and the [[[$\\mathrm{H_2}$]{}]{}]{} column density is non-linear and sensitive to excitation conditions, optical depths, and abundance variations due to the underlying physico-chemistry.]{} [We aim to use multi-molecule line emission to infer the" +"---\nabstract: 'Fusion and advanced fission power plants require advanced nuclear materials to function under new, extreme environments. Understanding the evolution of mechanical and functional properties during radiation damage is essential to the design and commercial deployment of these systems. The shortcomings of existing methods could be addressed by a new technique - intermediate energy proton irradiation (IEPI) - using beams of 10 - 30 MeV protons to rapidly and uniformly damage bulk material specimens before direct testing of engineering properties. IEPI is shown to achieve high fidelity to fusion and fission environments in both primary damage production and transmutation, often superior to nuclear reactor or typical (low-range) ion irradiation. 
Modeling demonstrates that high dose rates (0.1\u20131\u00a0DPA per day) can be achieved in bulk material specimens (100\u2013) with low temperature gradients and low induced radioactivity. The capabilities of IEPI are demonstrated through a 12 MeV proton irradiation and tensile test of thick tensile specimens of a nickel alloy (Alloy 718), reproducing neutron-induced data. These results demonstrate that IEPI enables high-throughput assessment of materials under reactor-relevant conditions, positioning IEPI to accelerate the pace of engineering-scale radiation damage testing and allow for quicker and more effective design of nuclear energy systems.'\naddress:" +"---\nabstract: 'In determining the gravitational signal of cusps from a network of cosmic strings loops, a number of key parameters have to be assumed. These include the typical number of cusps per period of string oscillation and the typical values of the sharpness parameters of left and right moving waves on the string, evaluated at the cusp event. Both of these are important, as the power stored in the gravitational waves emitted from the loops of string is proportional to the number of cusps per period, and inversely proportional to the product of the sharpness parameters associated with the left and right moving modes on the string. In suitable units both of these quantities are usually thought to be of order unity. In order to try and place these parameters on a more robust footing, we analyse in detail a large number of randomly chosen loops of string that can have high harmonics associated with them, such as one might expect to form by chopping off an infinite string in the early universe. This allows us to analyse tens of thousands of loops and obtain detailed statistics on these crucial parameters. While we find in general the sharpness parameters" +"---\nabstract: 'The $p$-tensor Ising model is a one-parameter discrete exponential family for modeling dependent binary data, where the sufficient statistic is a multi-linear form of degree $p {\\geqslant}2$. This is a natural generalization of the matrix Ising model, which provides a convenient mathematical framework for capturing, not just pairwise, but higher-order dependencies in complex relational data. In this paper, we consider the problem of estimating the natural parameter of the $p$-tensor Ising model given a single sample from the distribution on $N$ nodes. Our estimate is based on the maximum pseudo-likelihood (MPL) method, which provides a computationally efficient algorithm for estimating the parameter that avoids computing the intractable partition function. We derive general conditions under which the MPL estimate is $\\sqrt N$-consistent, that is, it converges to the true parameter at rate $1/\\sqrt N$. Our conditions are robust enough to handle a variety of commonly used tensor Ising models, including spin glass models with random interactions and models where the rate of estimation undergoes a phase transition. In particular, this includes results on $\\sqrt N$-consistency of the MPL estimate in the well-known $p$-spin Sherrington-Kirkpatrick (SK) model, spin systems on general $p$-uniform hypergraphs, and Ising models on the hypergraph stochastic" +"---\nabstract: 'Massive Dirac particles are a superposition of left and right chiral components. Since chirality is not a conserved quantity, the free Dirac Hamiltonian evolution induces chiral quantum oscillations, a phenomenon related to the *Zitterbewegung*, the trembling motion of free propagating particles. 
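For readers unfamiliar with the MPL method named in the $p$-tensor Ising abstract above, the standard pseudo-likelihood objective (notation ours, written for $\pm 1$ spins) replaces the intractable likelihood by a product of one-site conditionals, each of which is free of the partition function:

$$\hat{\beta}_{\mathrm{MPL}} \;=\; \arg\max_{\beta}\,\sum_{i=1}^{N}\log p_{\beta}(\sigma_i\,|\,\sigma_{-i}),
\qquad
p_{\beta}(\sigma_i\,|\,\sigma_{-i}) \;=\; \frac{e^{\beta\,\sigma_i\, m_i(\sigma)}}{2\cosh\!\big(\beta\, m_i(\sigma)\big)},$$

where $m_i(\sigma)$ is the local field obtained by differentiating the degree-$p$ multilinear form with respect to $\sigma_i$. Maximizing this sum is a one-dimensional concave problem in $\beta$, which is what makes the estimator computationally convenient.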
While not observable for particles in relativistic dynamical regimes, chiral oscillations become relevant when the particle\u2019s rest energy is comparable to its momentum. In this paper, we quantify the effect of chiral oscillations on the non-relativistic evolution of a particle state described as a Dirac bispinor and specialize our results to describe the interplay between chiral and flavor oscillations of non-relativistic neutrinos: we compute the time-averaged survival probability and observe an energy-dependent depletion of the quantity when compared to the standard oscillation formula. In the non-relativistic regime, this depletion due to chiral oscillations can be as large as 40$\\%$. Finally, we discuss the relevance of chiral oscillations in upcoming experiments which will probe the cosmic neutrino background.'\nauthor:\n- 'Victor A. S. V. Bittencourt[^1],$^{\\hspace{0.3mm}1}$Alex E. Bernardini[^2],$^{\\hspace{0.3mm}2}$Massimo Blasone[^3]$^{\\hspace{0.3mm}3,4}$'\ntitle: 'Chiral oscillations in the non-relativistic regime'\n---\n\nIntroduction {#intro}\n============\n\nThe Dirac equation has unique dynamical predictions for fermionic particles, from the Klein paradox [@Klein:1929; @Itzykson], related to" +"---\nabstract: 'We consider the Cauchy problem for doubly nonlinear degenerate parabolic equations with inhomogeneous density on noncompact Riemannian manifolds. We give a qualitative classification of the behavior of the solutions of the problem depending on the behavior of the density function at infinity and the geometry of the manifold, which is described in terms of its isoperimetric function. We establish several properties of the solutions, such as: stabilization of the solution to zero for large times, finite speed of propagation, universal bounds of the solution, and blow up of the interface. Each one of these behaviors of course takes place in a suitable range of parameters, whose definition involves a universal geometrical characteristic function, depending both on the geometry of the manifold and on the asymptotics of the density at infinity.'\naddress:\n- |\n Department of Basic and Applied Sciences for Engineering\\\n Sapienza University of Rome\\\n via A. Scarpa 16 00161 Rome, Italy\n- |\n South Mathematical Institute of VSC RAS\\\n Vladikavkaz, Russian Federation\nauthor:\n- Daniele Andreucci\n- 'Anatoli F. Tedeev'\nbibliography:\n- 'paraboli.bib'\n- 'pubblicazioni\\_andreucci.bib'\ntitle: Asymptotic properties of solutions to the Cauchy problem for degenerate parabolic equations with inhomogeneous density on manifolds\n---\n\n[^1]\n\n[^2] [^3]\n\nIntroduction {#s:intro}" +"---\nabstract: 'Game theory provides a paradigm through which we can study the evolving communication and phenomena that occur via rational agent interaction [@phd10-Umm]. 
In this work, we design a model framework and explore the Volunteer\u2019s Dilemma with the goals of 1) modeling it as a stochastic concurrent multiplayer game, 2) constructing properties to verify model correctness and reachability, 3) constructing strategy synthesis graphs to understand how the game is most optimally stepped through iteratively, and 4) analyzing a series of parameters to understand correlations with expected local and global rewards over a finite time horizon.'\nauthor:\n- |\n Jacob Dineen, A S M Ahsan-Ul Haque, Matthew Bielskas\\\n Department of Computer Science, University of Virginia, Charlottesville, VA 22904\\\n [@virginia.edu]{}\nbibliography:\n- 'references.bib'\ntitle: 'Formal Methods for an Iterated Volunteer\u2019s Dilemma'\n---\n\nIntroduction\n============\n\nWe propose an iterated version of the Volunteer\u2019s Dilemma game implemented in the PRISM Model Checker (PRISM henceforth). This is useful because with this software, one can easily tune game parameters to gain intuition about game dynamics. This allows us to see which setting changes correlate with changes in the expected reward for each player. Additionally, PRISM can provide us with a probabilistic graph that reflects a strategy that is" +"---\nabstract: 'In this paper, a construction of $(n,k,\\delta)$ LDPC convolutional codes over arbitrary finite fields, which generalizes the work of Robinson and Bernstein and the later work of Tong, is provided. The sets of integers forming a $(k,w)$-(weak) difference triangle set are used as supports of some columns of the sliding parity-check matrix of an $(n,k,\\delta)$ convolutional code, where $n\\in{\\mathbb{N}}$, $n>k$. The parameters of the convolutional code are related to the parameters of the underlying difference triangle set. In particular, a relation between the free distance of the code and $w$ is established as well as a relation between the degree of the code and the scope of the difference triangle set. Moreover, we show that some conditions on the weak difference triangle set ensure that the Tanner graph associated to the sliding parity-check matrix of the convolutional code is free from $2\\ell$-cycles not satisfying the full rank condition over any finite field. Finally, we relax these conditions and provide a lower bound on the field size, depending on the parity of $\\ell$, that is sufficient to still avoid $2\\ell$-cycles. This is important for improving the performance of a code and avoiding the presence of low-weight codewords and absorbing" +"---\nabstract: 'This paper describes a system developed for detecting propaganda techniques from news articles. We focus on examining how emotional salience features extracted from a news segment can help to characterize and predict the presence of propaganda techniques. Correlation analyses surfaced interesting patterns: for instance, the \u201cloaded language\" and \u201cslogan\" techniques are negatively associated with valence and joy intensity but are positively associated with anger, fear and sadness intensity. In contrast, \u201cflag waving\" and \u201cappeal to fear-prejudice\" have the exact opposite pattern. Through predictive experiments, results further indicate that whereas BERT-only features obtained an F1-score of 0.548, emotion intensity features and BERT hybrid features were able to obtain an F1-score of 0.570, when a simple feedforward network was used as the classifier in both settings. 
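As background for the Volunteer's Dilemma modeled above, the following Python sketch computes the symmetric mixed-strategy equilibrium of the textbook one-shot $N$-player game (not the iterated PRISM model of the paper); the cost and benefit values are illustrative.

```python
def volunteer_equilibrium(n, cost, benefit):
    """One-shot N-player Volunteer's Dilemma: everyone receives
    `benefit` if at least one player volunteers, and a volunteer
    additionally pays `cost` (cost < benefit).  At the symmetric mixed
    equilibrium a player is indifferent between volunteering and not:
    cost = benefit * P(no other player volunteers), which gives
    (1 - p)^(n-1) = cost / benefit for the volunteering probability p."""
    p = 1.0 - (cost / benefit) ** (1.0 / (n - 1))
    p_nobody = (1.0 - p) ** n   # probability the public good is lost
    return p, p_nobody

# Volunteering becomes rarer as the group grows, and the probability
# that nobody volunteers *increases* with n -- the diffusion of
# responsibility that makes the game interesting to model-check.
for n in (2, 5, 10):
    print(n, volunteer_equilibrium(n, cost=1.0, benefit=4.0))
```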
On gold test data, our system obtained a micro-averaged F1-score of 0.558 on overall detection efficacy over fourteen propaganda techniques. It performed relatively well in detecting \u201cloaded language\" (F1 = 0.772), \u201cname calling and labeling\" (F1 = 0.673), \u201cdoubt\" (F1 = 0.604) and \u201cflag waving\" (F1 = 0.543).'\nauthor:\n- |\n Gangeshwar Krishnamurthy, Raj Kumar Gupta, Yinping Yang\\\n Institute of High Performance Computing (IHPC),\\\n Agency for Science, Technology and Research (A\\*STAR), Singapore\\\n [{gangeshwark,gupta-rk,yangyp}@ihpc.a-star.edu.sg]{}\\\nbibliography:\n- 'semeval2020.bib'" +"---\nabstract: 'Computer models play a key role in many scientific and engineering problems. One major source of uncertainty in computer model experiments is input parameter uncertainty. Computer model calibration is a formal statistical procedure to infer input parameters by combining information from model runs and observational data. The existing standard calibration framework suffers from inferential issues when the model output and observational data are high-dimensional dependent data such as large time series, due to the difficulty in building an emulator and the non-identifiability between effects from input parameters and data-model discrepancy. To overcome these challenges, we propose a new calibration framework based on a deep neural network (DNN) with long-short term memory layers that directly emulates the inverse relationship between the model output and input parameters. Adopting the \u2018learning with noise\u2019 idea, we train our DNN model to filter out the effects of data-model discrepancy on input parameter inference. We also formulate a new way to construct interval predictions for the DNN using quantile regression to quantify the uncertainty in input parameter estimates. Through a simulation study and a real-data application with the WRF-hydro model, we show that our approach can yield accurate point estimates and well-calibrated interval estimates" +"---\nabstract: 'We present a quantitative theory of the suppression of the optical linewidth due to charge fluctuation noise in a *p*\u2013*n* diode, recently observed in Anderson *et al.*, Science **366**, 1225 (2019). We connect the local electric field with the voltage across the diode, allowing for the identification of the defect depth from the experimental threshold voltage. Furthermore, we show that an accurate description of the decoherence of such spin centers requires a complete spin\u20131 formalism that yields a bi-exponential decoherence process, and predict how reduced charge fluctuation noise suppresses the spin center\u2019s decoherence rate.'\nauthor:\n- 'Denis R. Candido'\n- 'Michael E. Flatt\u00e9'\nbibliography:\n- 'apssamp.bib'\nnocite: '[@*]'\ntitle: 'Suppression of the optical linewidth and spin decoherence of a quantum spin center in a *p*\u2013*n* diode'\n---\n\nIntroduction\n============\n\nThe role of the environment on the optical linewidth and spin decoherence of an optically-accessible spin center is well-known to be significant, and a major step forward in reducing environmental effects has been achieved recently by placing a spin center in a semiconductor *p*\u2013*n* diode wherein the effects of charge fluctuations can be suppressed[@anderson2019electrical]. 
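The propaganda-detection setup above (sentence representation concatenated with emotion-intensity scores, fed to a simple feedforward classifier) can be sketched as follows in PyTorch. The dimensions, dropout rate, and hidden width are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class HybridClassifier(nn.Module):
    """Feedforward classifier over [BERT embedding ; emotion features]."""

    def __init__(self, bert_dim=768, emo_dim=8, hidden=256, n_classes=14):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(bert_dim + emo_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, bert_vec, emo_vec):
        # emo_vec: intensities such as valence, joy, anger, fear, sadness.
        return self.net(torch.cat([bert_vec, emo_vec], dim=-1))
```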
Considerable effort has already been devoted to optimize such optically-accessible quantum-coherent spin centers associated with" +"---\nabstract: 'This article presents a whirlwind tour of some results surrounding the *Koebe-Andre\u2019ev-Thurston Theorem*, Bill Thurston\u2019s seminal circle packing theorem that appears in Chapter 13 of *The Geometry and Topology of Three-Manifolds*.'\nauthor:\n- 'Philip L. Bowers'\nbibliography:\n- 'OurBib.bib'\ntitle: |\n Combinatorics Encoding Geometry:\\\n The Legacy of Bill Thurston in the Story of One Theorem\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nBill Thurston was the most original and influential topologist of the last half-century. His impact on the discipline of geometric topology during that time is unsurpassed, and his insights in the topology and geometry of three-manifolds led finally to the resolution of the most celebrated problem of topology over the last century\u2014The Poincar\u00e9 Conjecture. He made fundamental contributions to many sub-disciplines within geometric topology, from the theory of foliations on manifolds to the combinatorial structure of rational maps on the two-sphere, and from geometric and automatic group theory to classical polyhedral geometry. Of course his foundational work on three-manifolds, first laid out in his courses at Princeton in the late nineteen-seventies, compiled initially as a Princeton paper-back monograph inscribed by Bill Floyd and available upon request as *The Geometry and Topology of Three-Manifolds* (GTTM)\u00a0[@Thurston:1980], and maturing as" +"---\nabstract: |\n We study the shadow behaviors of five dimensional (5D) black holes embedded in type IIB superstring/supergravity inspired spacetimes by considering solutions with and without rotations. Geometrical properties as shapes and sizes are analyzed in terms of the D3-brane number and the rotation parameter. Concretely, we find that the shapes are indeed significantly distorted by such physical parameters and the size of the shadows decreases with the brane or \u201ccolor\u201d number and the rotation. Then, we investigate geometrical observables and energy emission rate aspects.\n\n [**Keywords**]{}: Black holes, Shadows, D3-branes, Type IIB superstring theory.\nauthor:\n- |\n A. Belhaj$^{1}$[^1], H. Belmahi$^{1}$, M. Benali$^{1}$, W. El Hadri$^{1}$, H. El Moumni$^{2}$[^2], E. Torrente-Lujan$^{3}$[^3][^4]\\\n [$^1$ D\u00e9partement de Physique, Equipe des Sciences de la mati\u00e8re et du rayonnement, ESMaR]{}\\\n [Facult\u00e9 des Sciences, Universit\u00e9 Mohammed V de Rabat, Rabat, Morocco]{}\\\n [$^{2}$ EPTHE, Physics Department, Faculty of Science, Ibn Zohr University, Agadir, Morocco]{}\\\n [$^{3}$ IFT, Dep. de F\u00edsica, Univ. de Murcia, Campus de Espinardo, E-30100 Murcia, Spain]{}\\\ntitle: |\n FISPAC-TH/271/2020\\\n UQBAR-TH/2020-091\n\n [**Shadows of 5D Black Holes from String Theory** ]{}\n---\n\nIntroduction\n============\n\nBlack holes have been extensively investigated in connections with many gravity theories. Such investigations have been boosted by recent direct observations, the" +"---\nabstract: 'This article contains a complete proof of Gabrielov\u2019s rank Theorem, a fundamental result in the study of analytic map germs. Inspired by the works of Gabrielov and Tougeron, we develop formal-geometric techniques which clarify the difficult parts of the original proof. 
These techniques are of independent interest, and we illustrate this by adding a new (very short) proof of the Abhyankar-Jung Theorem. We include, furthermore, new extensions of the rank Theorem (concerning the Zariski main Theorem and elimination theory) to commutative algebra.'\naddress: 'Universit\u00e9 Aix-Marseille, Institut de Math\u00e9matiques de Marseille (UMR CNRS 7373), Centre de Math\u00e9matiques et Informatique, 39 rue F. Joliot Curie, 13013 Marseille, France'\nauthor:\n- Andr\u00e9 Belotto da Silva\n- Octave Curmi\n- Guillaume Rond\ntitle: 'A proof of A. Gabrielov\u2019s rank Theorem'\n---\n\n[^1]\n\nIntroduction\n============\n\nThis article contains a complete and self-contained proof of Gabrielov\u2019s rank Theorem, a fundamental result in the study of analytic map germs. Let us briefly present its context and the theorem.\n\nLet ${\\varphi}:({\\mathbb{K}}^n,0){\\longrightarrow}({\\mathbb{K}}^m,0)$ be an analytic map germ of generic rank $r$ over the field ${\\mathbb{K}}$ of real or complex numbers, that is, the image of ${\\varphi}$ is generically a submanifold of ${\\mathbb{K}}^m$ of dimension $r$. When ${\\varphi}$" +"---\nabstract: 'A recommender system generates personalized recommendations for a user by computing the preference score of items, sorting the items according to the score, and filtering top-$K$ items with high scores. While sorting and ranking items are integral for this recommendation procedure, it is nontrivial to incorporate them in the process of end-to-end model training since sorting is nondifferentiable and hard to optimize with gradient descent. This incurs the inconsistency issue between existing learning objectives and ranking metrics of recommenders. In this work, we present DRM (differentiable ranking metric) that mitigates the inconsistency and improves recommendation performance by employing the differentiable relaxation of ranking metrics. Via experiments with several real-world datasets, we demonstrate that the joint learning of the DRM objective upon existing factor based recommenders significantly improves the quality of recommendations, in comparison with other state-of-the-art recommendation methods.'\nauthor:\n- 'Hyunsung Lee^1^, Yeongjae Jang^2^, Jaekwang Kim^3^ Honguk Woo^4^\\'\nbibliography:\n- 'main.bib'\ntitle: |\n A Differentiable Ranking Metric Using Relaxed\\\n Sorting Operation for Top-K Recommender Systems\n---\n\nIntroduction\n============\n\nWith the massive growth of online content, it has become common for online content platforms to operate recommender systems that provide personalized recommendations, aiming to facilitate better user experiences and" +"---\nauthor:\n- Ke Li\n- Zilin Xiang\n- Tao Chen\n- Kay Chen Tan\nbibliography:\n- 'IEEEabrv.bib'\n- 'reference.bib'\ntitle: '**BiLO-CPDP: Bi-Level Programming for Automated Model Discovery in Cross-Project Defect Prediction**[^1]'\n---\n\n[**Keywords:** ]{}Cross-project defect prediction, transfer learning, classification techniques, automated parameter optimization, configurable software and tool\n\nIntroduction {#sec:introduction}\n============\n\nSoftware defects are errors in code and its logic that cause a software product to malfunction or to produce incorrect/unexpected results. Given that software systems become increasingly ubiquitous in our modern society, software defects are highly likely to result in disastrous consequences to businesses and daily lives. 
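As background for the relaxed sorting operation that DRM builds on, the sketch below shows one well-known continuous relaxation of the sorting operator (the NeuralSort construction); the paper's exact relaxation and loss may differ, and the temperature `tau` is an illustrative choice.

```python
import torch

def soft_sort_matrix(scores, tau=1.0):
    """Row-stochastic relaxation of the permutation matrix that sorts
    `scores` in decreasing order; as tau -> 0 it approaches the hard
    permutation, while staying differentiable for gradient descent."""
    s = scores.reshape(-1, 1)                   # (n, 1)
    n = s.shape[0]
    A = (s - s.T).abs()                         # A[i, j] = |s_i - s_j|
    col_sums = A.sum(dim=0)                     # sum_k |s_j - s_k|
    i = torch.arange(1, n + 1, dtype=s.dtype).reshape(-1, 1)
    arg = (n + 1 - 2 * i) * s.T - col_sums      # row i ranks position i
    return torch.softmax(arg / tau, dim=-1)     # (n, n), rows sum to 1
```

Multiplying this matrix by a vector of per-item relevance gives a soft version of "relevance at each rank", from which a smooth surrogate of a ranking metric such as DCG can be assembled and optimized end-to-end.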
For example, the latest *Annual Software Fail Watch* report from Tricentis[^2] shows that, globally, software defects/failures affected over 3.7 billion people and caused \$1.7 trillion in lost revenue.\n\nOne of the key reasons behind the prevalent defects in modern software systems is their increasingly soaring size and complexity. Due to the limited resources for software quality assurance and the intrinsic dependency among a large number of software modules, it is expensive, if not impossible, to rely on human efforts (e.g., code review) to thoroughly inspect software defects. Instead, it is more pragmatic to predict the defect-prone software modules to which software engineers" +"---\nabstract: 'We present `megas`, a new software tool that improves the simulation of Micromegas gas detectors. Our tool makes it possible to configure multiple arrangements with one or more layers of MM detectors. A series of simple commands can easily modify the constructive properties of each of these detectors, such as dimensions, gas composition, and electric field. `megas` is based on `Betaboltz` [@renda2020betaboltz; @betaboltz_git], an open-source library that simulates the step by step movement of electrons in an electric field using Monte-Carlo methods. To validate our application, we simulated a real-life example and performed a detailed analysis of our data against the actual experimental results. We present a selection of data obtained by simulating real use-case scenarios, compared with results available in the literature, in particular from the upgrade phase of the Muon spectrometer of the ATLAS detector at the LHC.'\naddress:\n- 'Faculty of Physics, University of Bucharest, Bucharest - M\u0103gurele, Romania '\n- 'Department of Elementary Particle Physics, IFIN-HH, Reactorului 30, RO-077125, P.O.B. MG-6, M\u0103gurele, Romania'\nauthor:\n- Dan Andrei Ciubotaru\n- Michele Renda\nbibliography:\n- 'bibliography.bib'\ntitle: 'megas: development and validation of a new simulation tool for the Micromegas detectors.'\n---\n\ngas detectors, micromegas, simulation 00-01, 99-00" +"---\nabstract: 'We discuss the effects of spatial interference between two infectious hotspots as a function of the mobility of individuals (wind speed) between the two and their relative degree of infectivity. As long as the upstream hotspot is less contagious than the downstream one, increasing the wind speed leads to a monotonic decrease of the infection peak in the downstream hotspot. Once the upstream hotspot becomes between roughly two and five times more infectious than the downstream one, an optimal wind speed emerges, whereby a local minimum peak intensity is attained in the downstream hotspot, along with a local maximum beyond which the beneficial effect of the wind is restored. Since this non-monotonic trend is reminiscent of the equation of state of non-ideal fluids, we dub the above phenomena \u201cepidemic condensation\u201d. When the relative infectivity of the upstream hotspot exceeds about a factor of five, the beneficial effect of the wind above the optimal speed is completely lost: any wind speed above the optimal one leads to a higher infection peak. It is also found that spatial correlations between the two hotspots decay much more slowly than their inverse distance. 
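A minimal caricature of the two-hotspot setting above can be written as a two-patch SIR model with one-way transport ("wind") from the upstream to the downstream patch. The sketch below is offered only for experimentation; the coupling structure, rates, and initial conditions are our assumptions, not the authors' model, and it is not guaranteed to reproduce the non-monotonic "condensation" behavior.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_hotspot_sir(t, y, beta_up, beta_down, gamma, w):
    """Two-patch SIR with individuals advected from patch 1 (upstream)
    to patch 2 (downstream) at rate w; total population is conserved."""
    S1, I1, R1, S2, I2, R2 = y
    dS1 = -beta_up * S1 * I1 - w * S1
    dI1 = beta_up * S1 * I1 - gamma * I1 - w * I1
    dR1 = gamma * I1 - w * R1
    dS2 = -beta_down * S2 * I2 + w * S1
    dI2 = beta_down * S2 * I2 - gamma * I2 + w * I1
    dR2 = gamma * I2 + w * R1
    return [dS1, dI1, dR1, dS2, dI2, dR2]

# Scan the wind speed and record the downstream infection peak.
y0 = [0.99, 0.01, 0.0, 0.999, 0.001, 0.0]
for w in (0.0, 0.05, 0.2):
    sol = solve_ivp(two_hotspot_sir, (0, 200), y0,
                    args=(0.6, 0.3, 0.1, w), dense_output=True)
    print(w, sol.y[4].max())   # peak of I2, the downstream hotspot
```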
It is hoped that the above findings may offer a" +"---\nauthor:\n- Tom\u00e1\u0161 Brauner\nbibliography:\n- 'references.bib'\ntitle: |\n Exceptional nonrelativistic effective field theories\\\n with enhanced symmetries\n---\n\n=1\n\nIntroduction {#sec:intro}\n============\n\nEffective field theory (EFT) is a general framework that allows one to focus on the physics relevant at a given energy scale, and to dispense with irrelevant microscopic details. This approach is especially powerful in case of physical systems, possessing an ordered ground state, where the low-energy physics is dominated by collective modes: the Nambu-Goldstone (NG) bosons of the symmetry spontaneously broken by the order parameter. In case of spontaneously broken internal, coordinate-independent symmetries, the construction of EFT for NG bosons is by now well-understood in both relativistic\u00a0[@Coleman1969a; @Callan1969a; @Leutwyler1994b] and nonrelativistic\u00a0[@Leutwyler1994a; @Watanabe2014a; @Andersen2014a] systems. It has provided key insight into issues such as the geometry of spontaneous symmetry breaking, coupling of NG bosons to external fields or other dynamical degrees of freedom, and the counting of NG bosons (see refs.\u00a0[@Brauner2010a; @Watanabe2020a; @Beekman2019a; @AlvarezGaume2020a] for a review of the latter subject).\n\nThe case of spontaneously broken spacetime, or more generally coordinate-dependent, symmetries is considerably more subtle. The most striking difference to coordinate-independent symmetries is that some broken symmetries now need not give rise to" +"---\nauthor:\n- 'P.\u00a0Van\u00a0Isacker[^1]'\ntitle: A solvable model for octupole phonons\n---\n\nIntroduction {#s_intro}\n============\n\nNuclei with a closed-shell configuration for neutrons and/or protons frequently exhibit low-energy excitations with angular momentum $J=3$ and negative parity. Such excitations are associated with nuclear shapes that break reflection symmetry and, in particular, with pear-like or octupole shapes\u00a0[@Butler96]. Given the closed-shell configuration of at least one type of nucleon, the nucleus is thought to have a spherical equilibrium shape in its ground state and to exhibit reflection asymmetric oscillations of the octupole type around that shape. Nuclei with neutrons [*and*]{} protons in the valence shell may acquire a permanent ground-state deformation and an open question is whether they can assume a permanent pear-like deformation. Interest in this question was rekindled in 2013 by observed indications of such static octupole deformation in the ground-state configuration of $^{224}$Ra\u00a0[@Gaffney13].\n\nBy virtue of their supposed collective structure, octupole excitations are thought to exhibit phonon-like behaviour\u00a0[@BM75], which renders them of particular interest, being at the cross-roads of microscopic and collective descriptions of nuclei. Consequently, many models of nuclear octupole excitations have been considered in the past (for a review, see Ref.\u00a0[@Butler96]). From a" +"---\nabstract: 'Following recent advancements, we consider a scenario of multipartite postquantum steering and general no-signaling assemblages. We introduce the notion of the edge of the set of no-signaling assemblages and we present its characterization. Next, we use this concept to construct witnesses for no-signaling assemblages without an LHS model. 
Finally, in the simplest nontrivial case of steering with two untrusted subsystems, we discuss the possibility of quantum realization of assemblages on the edge. In particular, for three-qubit states, we obtain a no-go type result, which states that it is impossible to produce assemblage on the edge using measurements described by POVMs as long as the rank of a given state is greater than or equal to 3.'\nauthor:\n- 'Micha[\u0142]{} Banacki'\n- 'Ricard Ravell Rodr[\u00ed]{}guez'\n- 'Pawe[\u0142]{} Horodecki'\ntitle: 'On the edge of the set of no-signaling assemblages'\n---\n\nIntroduction\n============\n\nQuantum theory provides us with phenomena going beyond any classical description or intuition. The most striking example of this statement is the possibility of obtaining correlations that cannot be explained by any local and realistic theory [@EPR1935; @Bell]. Another non-classical phenomenon of quantum mechanics is encapsulated in the idea of quantum steering, proposed by von Neumann [@S36] and" +"---\nabstract: |\n Recent years have witnessed the success of adaptive (or unified) approaches in estimating symmetric properties of discrete distributions, where the learner first obtains a distribution estimator independent of the target property, and then plugs the estimator into the target property as the final estimator. Several such approaches have been proposed and proved to be adaptively optimal, i.e. they achieve the optimal sample complexity for a large class of properties within a low accuracy, especially for a large estimation error $\\varepsilon\\gg n^{-1/3}$ where $n$ is the sample size.\n\n In this paper, we characterize the high accuracy limitation, or the penalty for adaptation, for general adaptive approaches. Specifically, we obtain the first known adaptation lower bound that under a mild condition, any adaptive approach cannot achieve the optimal sample complexity for every $1$-Lipschitz property within accuracy $\\varepsilon \\ll n^{-1/3}$. In particular, this result disproves a conjecture in [@acharya2017unified] that the profile maximum likelihood (PML) plug-in approach is optimal in property estimation for all ranges of $\\varepsilon$, and confirms a conjecture in [@han2021competitive] that their competitive analysis of the PML is tight.\nauthor:\n- 'Yanjun Han[^1]'\nbibliography:\n- 'di.bib'\ntitle: On the High Accuracy Limitation of Adaptive Property Estimation\n---" +"---\nabstract: 'Dipolar-octupolar pyrochlore magnets in a strong external magnet field applied in the \\[110\\] direction are known to form a \u2018chain\u2019 state, with subextensive degeneracy. Magnetic moments are correlated along one-dimensional chains carrying effective Ising degrees of freedom which are noninteracting on the mean-field level. Here, we investigate this phenomenon in detail, including the effects of quantum fluctuations. We identify two distinct types of chain phases, both featuring distinct subextensive, classical ground state degeneracy. Focussing on one of the two kinds, we discuss lifting of the classical degeneracy by quantum fluctuations. We map out the ground-state phase diagram as a function of the exchange couplings, using linear spin wave theory and real-space perturbation theory. We find a hierarchy of energy scales in the ground state selection, with the effective dimensionality of the system varying in an intricate way as the hierarchy is descended. 
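For context on the steering scenario discussed above: in the case of two untrusted subsystems, an assemblage $\{\sigma_{a_1 a_2|x_1 x_2}\}$ is said to admit an LHS model when it factorizes over a classical hidden variable (standard definition; the notation here is ours rather than the authors'):

$$\sigma_{a_1 a_2|x_1 x_2} \;=\; \sum_{\lambda}\, \pi(\lambda)\, p(a_1|x_1,\lambda)\, p(a_2|x_2,\lambda)\, \rho_{\lambda},$$

where $\pi$ is a probability distribution, the $p(a_i|x_i,\lambda)$ are local response functions, and the $\rho_{\lambda}$ are states of the trusted subsystem. Witnesses for no-signaling assemblages without such a model certify that no decomposition of this form exists.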
We derive an effective two-dimensional anisotropic triangular lattice Ising model with only three free parameters which accounts for the observed behavior. Connecting our results to experiment, they are consistent with the observation of a disordered chain state in Nd$_2$Zr$_2$O$_7$. We also show that the presence of two distinct types of chain phases has consequences for the" +"---\nabstract: 'Adversarial perturbation of images, in which a source image is deliberately modified with the intent of causing a classifier to misclassify the image, provides important insight into the robustness of image classifiers. In this work we develop two new methods for constructing adversarial perturbations, both of which are motivated by minimizing human ability to detect changes between the perturbed and source image. The first of these, the *Edge-Aware* method, reduces the magnitude of perturbations permitted in smooth regions of an image where changes are more easily detected. Our second method, the *Color-Aware* method, performs the perturbation in a color space which accurately captures human ability to distinguish differences in colors, thus reducing the perceived change. The Color-Aware and Edge-Aware methods can also be implemented simultaneously, resulting in image perturbations which account for both human color perception and sensitivity to changes in homogeneous regions. Because Edge-Aware and Color-Aware modifications exist for many image perturbations techniques, we also focus on computation to demonstrate their potential for use within more complex perturbation schemes. We empirically demonstrate that the Color-Aware and Edge-Aware perturbations we consider effectively cause misclassification, are less distinguishable to human perception, and are as easy to compute as the" +"---\nauthor:\n- 'Craig J. Copi'\n- Klaountia Pasmatsiou\n- 'Glenn D. Starkman'\nbibliography:\n- '2020\\_scalar\\_vector.bib'\ntitle: Scalar and vector tail radiation from the interior of the lightcone\n---\n\nIntroduction\n============\n\nSignals carried by massless particles are commonly believed to travel through our universe on the lightcone. This is approximately true of photons (massless vectors) and their associated electromagnetic waves, and gravitons (massless tensors) or at least the associated gravitational waves, and would be approximately true of massless scalars if they existed, because in 3+1-dimensional Minkowski space, the Greens functions for the massless scalar, vector, and tensor wave operators have support only on the null cone. Of course it is well known that the paths of those particles and their associated waves may be \u201cbent\u201d (relative to some hypothetical background homogeneous spacetime) by stress-energy-density inhomogeneities \u201cnearby\u201d the null geodesic between the time and place of their emission and the worldline of some observer. We call this gravitational lensing, and its observation with the correct magnitude was one of the early pieces of evidence supporting General Relativity (GR) [@Dyson:1920cwa]. There may even be multiple null geodesics connecting an emission event and the world-line of an observer \u2014 a phenomena known as" +"---\nabstract: |\n [Pollux]{} improves scheduling performance in deep learning (DL) clusters by adaptively co-optimizing inter-dependent factors both at the per-job level and at the cluster-wide level. Most existing schedulers expect users to specify the number of resources for each job, often leading to inefficient resource use. 
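The Edge-Aware idea described in the adversarial-perturbation abstract above lends itself to a compact illustration: weight the perturbation budget by a local gradient-magnitude map so that smooth regions, where humans detect changes most easily, receive small perturbations. The Sobel-based weighting below is one plausible instantiation, not the paper's exact scheme.

```python
import numpy as np
from scipy import ndimage

def edge_aware_weights(image):
    """Per-pixel budget in [0, 1]: large on edges/texture, small in
    smooth regions.  `image` is a float array in [0, 1], HxW or HxWx3."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    gx = ndimage.sobel(gray, axis=0)
    gy = ndimage.sobel(gray, axis=1)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-12)

def apply_edge_aware(image, perturbation):
    """Scale an attack perturbation by the edge map before adding it."""
    w = edge_aware_weights(image)
    if image.ndim == 3:
        w = w[..., None]                 # broadcast over color channels
    return np.clip(image + w * perturbation, 0.0, 1.0)
```

The Color-Aware variant would instead compute the perturbation in a perceptually motivated color space (e.g., CIELAB) before converting back; the two weightings can be composed.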
Some recent schedulers choose job resources for users, but do so without awareness of how DL training can be re-optimized to better utilize the provided resources.\n\n [Pollux]{} simultaneously considers both aspects. By monitoring the status of each job during training, [Pollux]{} models how their *goodput* (a metric we introduce to combine system throughput with statistical efficiency) would change by adding or removing resources. [Pollux]{} dynamically (re-)assigns resources to improve cluster-wide goodput, while respecting fairness and continually optimizing each DL job to better utilize those resources.\n\n In experiments with real DL jobs and with trace-driven simulations, [Pollux]{} reduces average job completion times by 37\u201350% relative to state-of-the-art DL schedulers, even when they are provided with ideal resource and training configurations for every job. [Pollux]{} promotes fairness among DL jobs competing for resources, based on a more meaningful measure of *useful* job progress, and reveals a new opportunity for reducing DL cost in cloud environments." +"---\nabstract: 'LMC X-1, a persistent, rapidly rotating, extra-galactic, black hole X-ray binary (BHXB) discovered in 1969, has always been observed in its high soft state. Unlike many other BHXBs, the black hole mass, source distance and binary orbital inclination are well established. In this work, we report the results of simultaneous broadband spectral studies of LMC X-1 carried out using the data from Soft X-ray Telescope and Large Area X-ray Proportional Counter aboard *AstroSat* as observed on 2016 November 26$^{th}$ and 2017 August 28$^{th}$. The combined spectrum was modelled with a multicolour blackbody emission (*diskbb*), a *Gaussian* along with a Comptonization component (*simpl*) in the energy range 0.7$-$30.0\u00a0keV. The spectral analysis revealed that the source was in its high soft state ($\\Gamma$\u00a0=\u00a02.67$^{+0.24}_{-0.24}$ and $\\Gamma$\u00a0=\u00a02.12$^{+0.19}_{-0.20}$) with a hot disc (kT$_{in}$\u00a0=\u00a00.86$^{+0.01}_{-0.01}$ and kT$_{in}$\u00a0=\u00a00.87$^{+0.02}_{-0.02}$). Thermal disc emission was fit with a relativistic model (*kerrbb*) and spin of the black hole was estimated to be 0.93$^{+0.01}_{-0.01}$ and 0.93$^{+0.04}_{-0.03}$ (statistical errors) for the two *Epochs* through X-ray continuum-fitting, which agrees with the previous results.'\nauthor:\n- |\n Sneha Prakash Mudambi,$^{1}$ A. Rao,$^{2}$ S. B. Gudennavar,$^{1}$[^1] R. Misra,$^{3}$ and S. G. Bubbly,$^{1}$\\\n \\\n $^{1}$Department of Physics" +"---\nauthor:\n- 'Francesco Marzari[^1]'\n- 'Gennaro D\u2019Angelo'\nbibliography:\n- 'biblio.bib'\ndate: 'Received ....; accepted ....'\ntitle: 'Dust distribution around low-mass planets on converging orbits'\n---\n\n[Super-Earths can form at large orbital radii and migrate inward due to tidal interactions with the circumstellar disk. In this scenario, convergent migration may occur and lead to the formation of resonant pairs of planets.]{} [We explore the conditions under which convergent migration and resonance capture take place, and what dynamical consequences can be expected on the dust distribution surrounding the resonant pair. ]{} [We combine hydrodynamic planet\u2013disk interaction models with dust evolution calculations to investigate the signatures produced in the dust distribution by a pair of planets in mean-motion resonances. ]{} [We find that convergent migration takes place when the outer planet is the more massive. 
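The goodput quantity introduced in the Pollux abstract above combines two measurable factors, which makes the cluster-level decision rule easy to sketch. In the Python below, the functional forms of the throughput and efficiency models are illustrative stand-ins, not Pollux's fitted models.

```python
def goodput(throughput_examples_per_s, stat_efficiency):
    """GOODPUT = system throughput x statistical efficiency."""
    return throughput_examples_per_s * stat_efficiency

def pick_allocation(candidate_allocs, throughput_model, efficiency_model):
    """Choose the resource allocation with the best predicted goodput."""
    return max(
        candidate_allocs,
        key=lambda a: goodput(throughput_model(a), efficiency_model(a)),
    )

# Toy models: throughput saturates as GPUs are added, while statistical
# efficiency decays as the aggregate batch size grows with the job.
throughput = lambda gpus: 100.0 * gpus / (1.0 + 0.1 * gpus)
efficiency = lambda gpus: 1.0 / (1.0 + 0.05 * gpus)
print(pick_allocation(range(1, 17), throughput, efficiency))
```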
However, convergent migration also depends on the local properties of the disk, and divergent migration may result as well. For similar disk parameters, the capture in low degree resonances (e.g., 2:1 or 3:2) is preferred close to the star where the resonance strength can more easily overcome the tidal torques exerted by the gaseous disk. Farther away from the star, convergent migration may result in capture in" +"---\nabstract: |\n An $(m, n)$-colored-mixed graph $G=(V, A_1, A_2,\\cdots, A_m, E_1, E_2,\\cdots, E_n)$ is a graph having $m$ colors of arcs and $n$ colors of edges. We do not allow two arcs or edges to have the same endpoints. A homomorphism from an $(m,n)$-colored-mixed graph $G$ to another $(m, n)$-colored-mixed graph $H$ is a morphism $\\varphi:V(G)\\rightarrow V(H)$ such that each edge (resp. arc) of $G$ is mapped to an edge (resp. arc) of $H$ of the same color (and orientation). An $(m,n)$-colored-mixed graph $T$ is said to be $P_g^{(m, n)}$-universal if every graph in $P_g^{(m, n)}$ (the planar $(m, n)$-colored-mixed graphs with girth at least $g$) admits a homomorphism to $T$.\n\n We show that planar $P_g^{(m, n)}$-universal graphs do not exist for $2m+n{\\geqslant}3$ (and any value of $g$) and find minimal (in the number of vertices) planar $P_g^{(m, n)}$-universal graphs in the other cases.\nauthor:\n- |\n Fabien Jacques and Pascal Ochem[^1]\\\n LIRMM, Universit\u00e9 de Montpellier, and CNRS, France\nbibliography:\n- 'bib.bib'\ntitle: 'Homomorphisms of planar $(m, n)$-colored-mixed graphs to planar targets'\n---\n\nIntroduction\n============\n\nThe concept of homomorphisms of $(m, n)$-colored-mixed graphs was introduced by J. Nes\u011bt\u0159il and A. Raspaud\u00a0[@MNCM] in order to generalize homomorphisms of $k$-edge-colored" +"---\nabstract: |\n Neural networks are a prevalent and effective machine learning component, and their application is leading to significant scientific progress in many domains. As the field of neural network systems is fast growing, it is important to understand how advances are communicated. Diagrams are key to this, appearing in almost all papers describing novel systems. This paper reports on a study into the use of neural network system diagrams, through interviews, card sorting, and qualitative feedback structured around ecologically-derived examples. We find a high diversity of usage, perception and preference in both creation and interpretation of diagrams, examining this in the context of existing design, information visualisation, and user experience guidelines.\n\n This interview study is used to derive a framework for improving existing diagrams. This framework is evaluated through a mixed-methods experimental study, and a \u201ccorpus-based\u201d approach examining properties of published diagrams linking the framework to citations. The studies suggest that the framework captures aspects relating to communicative efficacy of scholarly NN diagrams, and provides simple steps for their implementation.\naddress:\n- 'Department of Computer Science, University of Manchester, UK'\n- 'IDIAP Research Institute, Martigny, Switzerland'\nauthor:\n- Guy Clarke Marshall\n- Andr\u00e9 Freitas\n- Caroline Jay\nbibliography:\n-" +"---\nabstract: 'Based on the realistic nuclear force of the high-precision CD-Bonn potential, we have performed comprehensive calculations for neutron-rich calcium isotopes using the Gamow shell model (GSM) which includes resonance and continuum. 
The realistic GSM calculations reproduce well the binding energies and one- and two-neutron separation energies, predicting that $^{57}$Ca is the heaviest bound odd isotope and $^{70}$Ca is the dripline nucleus. Resonant states are predicted, which provides useful information for future experiments on particle emissions in neutron-rich calcium isotopes. Shell evolutions in the calcium chain around neutron numbers *N* = 32, 34 and 40 are understood by calculating effective single-particle energies, the excitation energies of the first $2^+$ states and two-neutron separation energies. The calculations support shell closures at $^{52}$Ca (*N* = 32) and $^{54}$Ca (*N* = 34) but show a weakening of the shell closure at $^{60}$Ca (*N* = 40). The possible shell closure at $^{70}$Ca (*N* = 50) is predicted.'\nauthor:\n- 'J.G. Li, B.S. Hu, Q. Wu, Y. Gao, S.J. Dai, and F.R. Xu'\nbibliography:\n- 'Ca\\_revision.bib'\ntitle: 'Neutron-rich calcium isotopes within realistic Gamow shell model calculations with continuum coupling'\n---\n\nIntroduction\n============\n\nThe long chain of calcium isotopes provides an ideal laboratory for both theoretical and experimental" +"---\nabstract: '3D face reconstruction is a fundamental task that can facilitate numerous applications such as robust facial analysis and augmented reality. It is also a challenging task due to the lack of high-quality datasets that can fuel current deep learning-based methods. However, existing datasets are limited in quantity, realism and diversity. To circumvent these hurdles, we introduce **[Pixel-Face]{}**, a large-scale, high-resolution and diverse 3D face dataset with massive annotations. Specifically, [Pixel-Face]{}\u00a0contains 855 subjects aged from 18 to 80. Each subject has more than 20 samples with various expressions. Each sample is composed of high-resolution multi-view RGB images and 3D meshes with various expressions. Moreover, we collect precise landmark annotations and 3D registration results for each sample. To demonstrate the advantages of [Pixel-Face]{}, we re-parameterize the 3D Morphable Model (3DMM) into **[Pixel-3DM]{}**\u00a0using the collected data. We show that the obtained [Pixel-3DM]{}\u00a0is better at modeling a wide range of face shapes and expressions. We also carefully benchmark existing 3D face reconstruction methods on our dataset. Moreover, [Pixel-Face]{}\u00a0serves as an effective training source. We observe that the performance of current face reconstruction models significantly improves both on existing benchmarks and [Pixel-Face]{}\u00a0after being fine-tuned using our newly collected
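For orientation, the 3DMM that Pixel-3DM re-parameterizes has, in its usual linear form, a mean shape plus identity and expression bases; the Python sketch below shows that standard form with illustrative array shapes (the basis dimensions and naming are our assumptions, not the Pixel-3DM parameterization).

```python
import numpy as np

class Morphable3DFace:
    """Linear 3D Morphable Model: S(alpha, beta) = mu + U_id a + U_expr b."""

    def __init__(self, mean_shape, id_basis, expr_basis):
        self.mu = mean_shape      # (3N,)  stacked xyz of N mesh vertices
        self.U_id = id_basis      # (3N, k_id)   identity basis
        self.U_expr = expr_basis  # (3N, k_expr) expression basis

    def synthesize(self, alpha, beta):
        """Return the mesh for identity coeffs alpha, expression beta."""
        return self.mu + self.U_id @ alpha + self.U_expr @ beta
```

Fitting such a model to a dataset amounts to estimating `mu` and the bases (e.g., by PCA over registered scans), which is where a large, diverse corpus of registered meshes pays off.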
On taking the continuum limit of the Krylov Hamiltonian, the edge mode is found to be equivalent to the quasi-stable mode of a Dirac Hamiltonian on a half line, with a mass which is non-zero over a finite distance, before terminating into a gapless metallic bulk. The analytic estimates are found to be in good agreement with the numerically obtained lifetimes of the edge mode.'\nauthor:\n- 'Daniel J. Yates$^{1}$'\n- 'Alexander G. Abanov$^{2,3}$'\n- 'Aditi Mitra$^{1}$'\ntitle:" +"---\nabstract: 'The model of continuous spontaneous localization (CSL) is the most prominent consistent modification of quantum mechanics predicting an objective quantum-to-classical transition. Here we show that precision interferometry with Bose-Einstein condensed atoms can serve to lower the current empirical bound on the localization rate parameter by several orders of magnitude. This works by focusing on the atom count distributions rather than just mean population imbalances in the interferometric signal of squeezed BECs, without the need for highly entangled GHZ-like states. In fact, the interplay between CSL-induced diffusion and dispersive atom-atom interactions results in an amplified sensitivity of the condensate to CSL. We discuss experimentally realistic measurement schemes utilizing state-of-the-art experimental techniques to test new regions of parameter space and, pushed to the limit, to probe and potentially rule out large relevant parameter regimes of CSL.'\nauthor:\n- Bj\u00f6rn Schrinski\n- Philipp Haslinger\n- J\u00f6rg Schmiedmayer\n- Klaus Hornberger\n- Stefan Nimmrichter\ntitle: 'Testing collapse models with Bose-Einstein-Condensate interferometry'\n---\n\nIntroduction {#Introduction}\n============\n\nPostulating an objective, spontaneous collapse process for the wave function is a way to overcome the quantum measurement problem and to explain the fundamental absence of spatial superpositions on the macroscopic scale [@bassi2013models]. This idea deems quantum" +"---\nabstract: 'Ground based long-range passive imaging systems often suffer from degraded image quality due to a turbulent atmosphere. While methods exist for removing such turbulent distortions, many are limited to static sequences which cannot be extended to dynamic scenes. In addition, the physics of the turbulence is often not integrated into the image reconstruction algorithms, making the physics foundations of the methods weak. In this paper, we present a unified method for atmospheric turbulence mitigation in both static and dynamic sequences. We are able to achieve better results compared to existing methods by utilizing (i) a novel space-time non-local averaging method to construct a reliable reference frame, (ii) a geometric consistency and a sharpness metric to generate the lucky frame, (iii) a physics-constrained prior model of the point spread function for blind deconvolution. 
Experimental results based on synthetic and real long-range turbulence sequences validate the performance of the proposed method.'\nauthor:\n- 'Zhiyuan\u00a0Mao,\u00a0 Nicholas\u00a0Chimitt,\u00a0 and\u00a0Stanley\u00a0H.\u00a0Chan,\u00a0[^1]'\nbibliography:\n- 'egbibnew.bib'\ntitle: Image Reconstruction of Static and Dynamic Scenes through Anisoplanatic Turbulence\n---\n\nAtmospheric turbulence, reference frame, lucky region, blind deconvolution\n\nIntroduction\n============\n\nGround-based long-range passive imaging systems often suffer from degraded image quality due to" +"---\nabstract: 'Using a distributed representation formula of the Gateaux derivative of the Dirichlet to Neumann map with respect to movements of a polygonal conductivity inclusion, [@BMPS], we extend the results obtained in [@BF] proving global Lipschitz stability for the determination of a polygonal conductivity inclusion embedded in a layered medium from knowledge of the Dirichlet to Neumann map.'\naddress:\n- 'Dipartimento di Matematica \u201cBrioschi\u201d, Politecnico di Milano and New York University Abu Dhabi'\n- 'Dipartimento di Matematica e Informatica \u201cU. Dini\u201d, Universit\u00e0 di Firenze'\n- 'Dipartimento di Matematica e Informatica \u201cU. Dini\u201d, Universit\u00e0 di Firenze'\nauthor:\n- Elena\u00a0Beretta\n- Elisa\u00a0Francini\n- Sergio\u00a0Vessella\ntitle: Lipschitz stable determination of polygonal conductivity inclusions in a layered medium from the Dirichlet to Neumann map \n---\n\nIntroduction\n============\n\nIn this paper we consider the inverse problem of determining a polygonal conductivity inclusion in a layered medium. We address the issue of stable reconstruction from knowledge of the Dirichlet to Neumann map, proving a quantitative Lipschitz stability estimate. This extends the results obtained in [@BF] where Lipschitz stability was proved in the case of one or more well-separated polygonal inclusions embedded in a homogeneous medium. There, a crucial step to prove Lipschitz" +"---\nabstract: 'Floquet-Magnus (FM) expansion theory is a powerful tool in periodically driven (Floquet) systems under high-frequency drives. In closed systems, it dictates that their stroboscopic dynamics under a time-periodic Hamiltonian is well captured by the FM expansion, which gives a static effective Hamiltonian. On the other hand, in dissipative systems driven by a time-periodic Liouvillian, it remains an important and nontrivial problem whether the FM expansion gives a static Liouvillian describing continuous-time Markovian dynamics, which we refer to as the Liouvillianity of the FM expansion. We answer this question for generic systems with local interactions. We find that, while noninteracting systems can either break or preserve Liouvillianity of the FM expansion, generic few-body and many-body interacting systems break it under any finite drive, which is essentially caused by propagation of interactions via higher order terms of the FM expansion. Liouvillianity breaking implies that Markovian dissipative Floquet systems in the high-frequency regimes do not have static (Markovian) counterparts, giving a signature of emergent non-Markovianity. 

Our theory provides useful insight for the pursuit of unique phenomena in dissipative Floquet systems.'\nauthor:\n- Kaoru Mizuta\n- Kazuaki Takasan\n- Norio Kawakami\ntitle: |\n Breakdown of Markovianity by interactions\\\n in stroboscopic Floquet-Lindblad dynamics under" +"---\nabstract: 'This paper analyzes some speed and performance improvement methods of the Transformer architecture in recent years, mainly their application in dedicated model training. The dedicated model studied here is an open domain persona-aware dialogue generation model, and the dataset consists of multi-turn short dialogues; the total length of a single input sequence is no more than 105 tokens. Therefore, many improvements to the Transformer architecture and attention mechanism for long sequence processing are not discussed in this paper. The source code of the experiments has been open sourced[^1].'\nauthor:\n- |\n Qiang Han\\\n [{gvvvv}@163.com]{}\\\ntitle: 'Improvement of a dedicated model for open domain persona-aware dialogue generation'\n---\n\nBackground\n==========\n\nThe revolution in the field of NLP (Natural Language Processing) started with the attention mechanism as its foundation; the Transformer architecture ignited the fuse, and the BERT model officially raised the curtain. The simple, purely attention-based architecture replaced the relatively complex RNN-family models that used to be the mainstream of NLP; in the past four years it has changed the field of NLP in an all-round way and created the ImageNet moment of NLP.\n\nOpen domain dialogue generation is an important and" +"---\nabstract: |\n The recently constructed theory of radio wave propagation in the pulsar magnetosphere outlines the general aspects of the radio light curve and polarization formation. It allows us to describe general properties of mean profiles, such as the position angle ($PA$) of the linear polarization, and the circular polarization for the realistic structure of the pair creation region in the pulsar magnetosphere. In this work, we present an application of the radio wave propagation theory to the radio observations of pulsar PSR J1906+0746. This pulsar is particularly interesting because observations of relativistic spin-precession in a binary system allow us to put strong constraints on its geometry. Because it is an almost orthogonal rotator, the pulsar allows us to observe both magnetic poles; as we show, this is crucial for testing the theory of radio wave propagation and obtaining constraints on the parameters of magnetospheric plasma. Our results show that plasma parameters are qualitatively consistent with theories of pair plasma production in polar cap discharges. Specifically, for PSR J1906+0746, we constrain the plasma multiplicity $\\lambda \\sim 10^3$ and the Lorentz-factor of secondary plasma $\\gamma \\sim $ a few hundred.\\\nauthor:\n- |\n A.K. Galishnikova,$^{1,2}$[^1] A.A. Philippov$^{3,2}$ and V.S. Beskin$^{2,4}$\\" +"---\nabstract: |\n Budgeted uncertainty sets have been established as a major influence on uncertainty modeling for robust optimization problems. A drawback of such sets is that the budget constraint only restricts the global amount of cost increase that can be distributed by an adversary. Local restrictions, while important for many applications, cannot be modeled this way.\n\n We introduce a new variant of budgeted uncertainty sets, called locally budgeted uncertainty. 

In this setting, the uncertain parameters are partitioned such that a classic budgeted uncertainty set applies to each partition, called a region.\n\n In a theoretical analysis, we show that the robust counterpart of such problems for a constant number of regions remains solvable in polynomial time, if the underlying nominal problem can be solved in polynomial time as well. If the number of regions is unbounded, we show that the robust selection problem remains solvable in polynomial time, while also providing hardness results for other combinatorial problems.\n\n In computational experiments using both random and real-world data, we show that using locally budgeted uncertainty sets can have considerable advantages over classic budgeted uncertainty sets.\nauthor:\n- 'Marc Goerigk[^1]'\n- 'Stefan Lendl[^2]'\ntitle: Robust Combinatorial Optimization with Locally Budgeted Uncertainty\n---\n\n**Keywords:** robust" +"---\nabstract: 'Identifying, measuring and reporting lesions accurately and comprehensively from patient scans are important yet time-consuming procedures for physicians. Computer-aided lesion/significant-findings detection techniques are at the core of medical imaging; they remain very challenging due to the tremendously large variability of lesion appearance, location and size distributions in 3D imaging. In this work, we propose a novel deep anchor-free one-stage framework that incorporates (1) operators to recycle the architectural configurations and pre-trained weights from the off-the-shelf 2D networks, especially ones with large capacities to cope with data variance, and (2) a new method to effectively regress the 3D lesion spatial extents by pinpointing their representative key points on lesion surfaces. Experimental validations are first conducted on the public large-scale NIH DeepLesion dataset where our proposed method delivers new state-of-the-art quantitative performance. We also test on our in-house dataset for liver tumor detection. Our method generalizes well on both large-scale and small-sized tumor datasets in CT imaging.'\nauthor:\n- Jinzheng Cai\n- Ke Yan\n- 'Chi-Tung Cheng'\n- Jing Xiao\n- 'Chien-Hung Liao'\n- |\n \\\n Le Lu\n- 'Adam P. Harrison'\nbibliography:\n- 'paii.bib'\ntitle: 'Deep Volumetric Universal Lesion Detection using Light-Weight Pseudo 3D Convolution and Surface Point Regression'\n---\n\nIntroduction" +"---\nabstract: 'This paper obtains shape-related parameters and functions of a Power Module ferrite core for a design-oriented inductor model, which is a fundamental tool to design any electronic power converter and its control policy. To improve accuracy, some particular modifications have been introduced into the standardized method of obtaining characteristic core areas and lengths. 

Also, a novel approach is taken to obtain the air gap reluctance as a function of air gap length for that specific core shape.'\nauthor:\n- \n- \ntitle: 'Power Module (PM) core-specific parameters for a detailed design-oriented inductor model'\n---\n\npower module ferrite core, ungapped core model, air gap reluctance model, air gap length computation, coil former\n\nIntroduction\n============\n\nFerrite-core based inductors are commonly found in the LC output filter of voltage source inverters (VSI) [@TradeoffStudyHeatSinkOutputFilterVolumeGaNHEMTBasedSingle-PhaseInverter:Castellazzi], [@HybridActivePowerFilterGaNPowerStage5kWSinglePhaseInverter:Manchia], as energy storage devices in DC/DC converters [@200COperationDC-DCConverterSiCPowerDevices:Jordan], [@99EfficientThree-phaseBuck-typeSiCMOSFETPFCRectifierMinimizingLifeCycleCostDCdataCenters:Kolar] and as line-input filters in PFC converters [@High-VoltageSiC-BasedBoostPFCLEDApplications:Salamero], [@DesignLossAnalysisHighFrequencyPFCconverter:Song], among many other power conversion applications. Due to the ferrite material properties, these inductors have to deal with relatively high-frequency currents, sometimes being superimposed on relatively large-amplitude low-frequency currents. It is of paramount importance to design these inductors in a way that a minimum inductance value is always ensured which" +"---\nabstract: 'Traditionally, time-development of the mean square displacement has been employed to determine the diffusion coefficient from the trajectories of single particles. However, this approach is sensitive to the noise and the motion blur upon image acquisition. Recently, Vestergaard et al. have proposed a novel method based on the covariance between the shifted displacement series. This approach gives a more robust estimator of the diffusion coefficient for one-dimensional diffusion without bias, i.e., when the mean velocity is zero. Here, we extend this approach to a potentially biased random walk on a two-dimensional lattice. First, we describe the relationship between the hopping rates to the eight adjacent sites and the time development of the higher-order moments of the stochastic two-dimensional displacements. Then, we derive the covariance-based estimators for these higher-order moments. Numerical simulations confirmed that the procedure presented here allows inference of the stochastic hopping rates from two-dimensional trajectory data with location error and motion blur (a one-dimensional sketch of the covariance-based estimator is given below).'\nauthor:\n- Masanori Mishima\nbibliography:\n- '2dcve.bib'\ntitle: 'Inference of hopping rates of anisotropic random walk on a 2D lattice via covariance-based estimators of diffusion parameters'\n---\n\n\\[sec:level1\\]Introduction\n==========================\n\nThe forward and backward hopping rates of a random walk on a one-dimensional lattice are linked" +"---\nabstract: 'The increasing computational demand of Deep Learning has propelled research in special-purpose inference accelerators based on emerging non-volatile memory (NVM) technologies. Such NVM crossbars promise fast and energy-efficient in-situ Matrix Vector Multiplication (MVM), thus alleviating the long-standing von Neumann bottleneck in today\u2019s digital hardware. However, the analog nature of computing in these crossbars is inherently approximate and results in deviations from ideal output values, which reduces the overall performance of Deep Neural Networks (DNNs) under normal circumstances. In this paper, we study the impact of these non-idealities under adversarial circumstances. 

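The hopping-rate record above builds on the one-dimensional covariance-based estimator (CVE) of Vestergaard et al.; a minimal Python sketch follows, assuming regularly sampled positions, Gaussian localization noise, and a motion-blur coefficient R (with R = 1/6 a common choice for continuous illumination). The function name `cve_diffusion` and the synthetic check are illustrative choices, not code from the paper.

```python
import numpy as np

def cve_diffusion(x, dt, R=1/6):
    """Covariance-based estimate of the diffusion coefficient D and the
    localization-error variance sigma^2 from a regularly sampled 1D
    trajectory x, following the estimator of Vestergaard et al. (2014)."""
    dx = np.diff(x)                          # displacement series
    msd1 = np.mean(dx**2)                    # <dx_n^2>
    cov1 = np.mean(dx[:-1] * dx[1:])         # shifted covariance <dx_n dx_{n+1}>
    D = msd1 / (2 * dt) + cov1 / dt          # noise/blur bias cancels here
    sigma2 = R * msd1 + (2 * R - 1) * cov1   # localization-error variance
    return D, sigma2

# Synthetic check: a pure random walk with localization noise, no motion blur.
rng = np.random.default_rng(0)
D_true, dt, sigma, n = 0.5, 0.01, 0.05, 100_000
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D_true * dt), n))
x_noisy = x + rng.normal(0.0, sigma, n)
D_hat, s2_hat = cve_diffusion(x_noisy, dt, R=0)  # R=0: no blur simulated
print(D_hat, s2_hat)  # D_hat close to 0.5, s2_hat close to sigma**2
```

The key point is that localization noise makes successive displacements anticorrelated, so adding the shifted-covariance term cancels the bias of the naive MSD estimate; the paper generalizes this idea to higher-order moments of displacements on a 2D lattice.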
We show that the non-ideal behavior of analog computing lowers the effectiveness of adversarial attacks, in both Black-Box and White-Box attack scenarios. In a non-adaptive attack, where the attacker is unaware of the analog hardware, we observe that analog computing offers a varying degree of intrinsic robustness, with a peak adversarial accuracy improvement of 35.34%, 22.69%, and 9.90% for white-box PGD ($\\epsilon$=1/255, iter=30) for CIFAR-10, CIFAR-100, and ImageNet, respectively. We also demonstrate \u201cHardware-in-Loop\u201d adaptive attacks that circumvent this robustness by utilizing the knowledge of the NVM model.'\nauthor:\n- \nbibliography:\n- 'references.bib'\ntitle: On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks\n---\n\nIntroduction\n============\n\nDeep" +"---\nabstract: 'We consider the *graph coloring game*, a game in which two players take turns properly coloring the vertices of a graph, with one player attempting to complete a proper coloring, and the other player attempting to prevent a proper coloring. We show that if a graph $G$ has a proper coloring in which the *game coloring number* of each bicolored subgraph is bounded, then the *game chromatic number* of $G$ is bounded. As a corollary to this result, we show that for two graphs $G_1$ and $G_2$ with bounded game coloring number, the Cartesian product $G_1 \\square G_2$ has bounded game chromatic number, answering a question of X. Zhu. We also obtain an upper bound on the game chromatic number of the strong product $G_1 \\boxtimes G_2$ of two graphs.'\naddress: 'Department of Mathematics, Simon Fraser University, Burnaby, BC, Canada'\nauthor:\n- Peter Bradshaw\nbibliography:\n- 'gameColoring.bib'\ntitle: 'Graph colorings with restricted bicolored subgraphs: II. The graph coloring game'\n---\n\nIntroduction\n============\n\nThe *graph coloring game* is a game played on a finite graph $G$ with perfect information by two players, Alice and Bob. In the graph coloring game, Alice moves first, and Alice and Bob take turns" +"---\nabstract: 'The cluster multipole (CMP) expansion for magnetic structures provides a scheme to systematically generate candidate magnetic structures, specifically including noncollinear magnetic configurations adapted to the crystal symmetry of a given material. A comparison with the experimental data collected on MAGNDATA shows that the most stable magnetic configurations in nature are linear combinations of only a few CMPs. Furthermore, a high-throughput calculation for all candidate magnetic structures is performed in the framework of spin-density functional theory (SDFT). We benchmark the predictive power of CMP+SDFT with ${2935}$ calculations, which show that (i) the CMP expansion administers an exhaustive list of candidate magnetic structures, (ii) CMP+SDFT can narrow down the possible magnetic configurations to a handful of computed configurations, and (iii) SDFT reproduces the experimental magnetic configurations with an accuracy of $\\pm0.5{\\,\\mu_\\text{B}}$. For a subset, the impact of on-site Coulomb repulsion $U$ is investigated by means of 1545 CMP+SDFT+U calculations, revealing no further improvement in the predictive power.'\nauthor:\n- 'M.-T. Huebsch$^{1,2}$, T. Nomoto$^2$, M.-T. Suzuki$^{3,4}$ and R. 

Arita$^{1,2}$'\nbibliography:\n- 'benchlib.bib'\ntitle: 'Benchmark for *ab initio* prediction of magnetic\u00a0structures based on cluster\u00a0multipole\u00a0theory'\n---\n\nIntroduction {#sec:Introduction}\n============\n\nThe grand challenge in first-principles calculation for magnetic materials is whether we" +"---\nabstract: 'We study the asymptotic properties of kinks in connection with the deformation procedure. We show that, upon deformation of the field-theoretic model, the asymptotics of kinks can change or remain unchanged, depending on the properties of the deforming function. The cases of both explicit and implicit kinks are considered. In addition, we show that the deformation procedure can be applied to the important case of implicit kinks. We also prove that for any kink with a power-law tail, the stability potential decreases as the inverse square of the coordinate. The physical consequences of the deformation are discussed: the change of the kink mass, as well as the asymptotic behavior of the kink-antikink force.'\nauthor:\n- 'Petr A. Blinov'\n- 'Tatiana V. Gani'\n- 'Vakhid A. Gani'\ntitle: Deformations of kink tails\n---\n\nIntroduction {#sec:Introduction}\n============\n\nKink solutions (kinks) are topologically nontrivial solutions of $(1+1)$-dimensional field-theoretic models with a real scalar field, the dynamics of which is given by the self-interaction (potential), leading to a nonlinear evolution equation [@Shnir.book.2018; @Vachaspati.book.2006; @Manton.book.2004; @Vilenkin.book.2000; @Rajaraman.book.1982]. Models that admit kink solutions have an extremely wide application in physics and arise in the description of various processes and objects. A classical example" +"---\nabstract: 'We demonstrate the creation of a spin-1/2 state *via* the atomically controlled generation of magnetic carbon radical ions (CRIs) in synthetic two-dimensional transition metal dichalcogenides (TMDs). Hydrogenated carbon impurities located at chalcogen sites introduced by chemical doping can be activated with atomic precision by hydrogen depassivation using a scanning probe tip. In its anionic state, the carbon impurity exhibits a magnetic moment of [$1\\,\\text{$\\mu_\\text{B}$}$]{} resulting from an unpaired electron populating a spin-polarized in-gap orbital of C$^{\\bullet -}_\\text{S}$. Fermi level control by the underlying graphene substrate can charge and decharge the defect, thereby activating or quenching the defect magnetic moment. By inelastic tunneling spectroscopy and density functional theory calculations we show that the CRI defect states couple to a small number of vibrational modes, including a local, breathing-type mode. Interestingly, the electron-phonon coupling strength critically depends on the spin state and differs for monolayer and bilayer . These carbon radical ions in TMDs comprise a new class of surface-bound, single-atom spin-qubits that can be selectively introduced, are spatially precise, feature a well-understood vibronic spectrum, and are charge state controlled.'\nauthor:\n- 'Katherine A. Cochrane'\n- 'Jun-Ho Lee$^*$'\n- Christoph Kastl\n- 'Jonah B. Haber'\n- Tianyi Zhang\n- Azimkhan" +"---\nabstract: 'Many municipalities and road authorities seek to implement automated evaluation of road damage. However, they often lack technology, know-how, and funds to afford state-of-the-art equipment for data collection and analysis of road damages. 

Although some countries, like Japan, have developed less expensive and readily available smartphone-based methods for automatic road condition monitoring, other countries still struggle to find efficient solutions. This work makes the following contributions in this context. Firstly, it assesses the usability of the Japanese model for other countries. Secondly, it proposes a large-scale heterogeneous road damage dataset comprising 26620 images collected from multiple countries using smartphones. Thirdly, it proposes generalized models capable of detecting and classifying road damages in more than one country. Lastly, it provides recommendations for readers, local agencies, and municipalities of other countries when another country publishes its data and model for automatic road damage detection and classification. Our dataset is available at (https://github.com/sekilab/RoadDamageDetector/).'\nauthor:\n- 'Deeksha Arya[^1]'\n- Hiroya Maeda\n- Sanjay Kumar Ghosh\n- Durga Toshniwal\n- Alexander Mraz\n- Takehiro Kashiyama\n- 'Yoshihide Sekimoto[^2]'\nbibliography:\n- 'Bibliography-APA.bib'\ntitle: '**Transfer Learning-based Road Damage Detection for Multiple Countries**'\n---\n\nIntroduction {#Introduction}\n============\n\nRoad infrastructure is a crucial public asset as it" +"---\nabstract: 'Online abuse directed towards women on the social media platform Twitter has attracted considerable attention in recent years. An automated method to effectively identify misogynistic abuse could improve our understanding of the patterns, driving factors, and effectiveness of responses associated with abusive tweets over a sustained time period. However, training a neural network (NN) model with a small set of labelled data to detect misogynistic tweets is difficult. This is partly due to the complex nature of tweets which contain misogynistic content, and the vast number of parameters that need to be learned in an NN model. We have conducted a series of experiments to investigate how to train an NN model to detect misogynistic tweets effectively. In particular, we have customised and regularised a Convolutional Neural Network (CNN) architecture and shown that the word vectors pre-trained on a task-specific domain can be used to train a CNN model effectively when a small set of labelled data is available. A CNN model trained in this way yields improved accuracy over state-of-the-art models.'\nauthor:\n- Md Abul Bashar\n- Richi Nayak\n- Nicolas Suzor\n- Bridget Weir\nbibliography:\n- 'References.bib'\ntitle: 'Misogynistic Tweet Detection: Modelling CNN with Small" +"---\nabstract: 'A convex geometry is a closure system satisfying the anti-exchange property. In this work we document all convex geometries on 4- and 5-element base sets with respect to their representation by circles on the plane. All 34 non-isomorphic geometries on a 4-element set can be represented by circles, and of 672 known geometries on a 5-element set, we made representations of 623. Of the 49 remaining geometries on a 5-element set, one was already shown not to be representable due to the Weak Carousel property, as articulated by Adaricheva and Bolat (Discrete Mathematics, 2019). 

In this paper we show that 7 more of these convex geometries cannot be represented by circles on the plane, due to what we term the *Triangle Property*.'\nauthor:\n- 'Polly Mathews Jr.[^1]'\ntitle: Convex geometries representable by at most 5 circles on the plane\n---\n\nIntroduction\n============\n\nIn this project we address the problem raised in Adaricheva and Bolat [@AdBo19]: whether all geometries with convex dimension at most 5 are representable by circles on the plane using the closure operator of convex hull for circles.\n\nAn early survey on the topic of convex geometries is given by Edelman and Jamison [@EdJa85], and the" +"---\nabstract: |\n > This paper explores hierarchical clustering in the case where pairs of points have dissimilarity scores (e.g. distances) as a part of the input. The recently introduced objective for points with dissimilarity scores results in *every tree* being a $\\frac{1}{2}$ approximation if the distances form a metric. This shows the objective does not make a significant distinction between a good and a poor hierarchical clustering in metric spaces.\n >\n > Motivated by this, the paper develops a new global objective for hierarchical clustering in Euclidean space. The objective captures the criterion that has motivated the use of divisive clustering algorithms: that when a split happens, points in the same cluster should be more similar than points in different clusters. Moreover, this objective gives reasonable results on ground-truth inputs for hierarchical clustering.\n >\n > The paper builds a theoretical connection between this objective and the bisecting $k$-means algorithm. This paper proves that the optimal $2$-means solution results in a constant approximation for the objective. This is the first paper to show the bisecting $k$-means algorithm optimizes a natural global objective over the entire tree.\nauthor:\n- Yuyan Wang^1^\n- |\n Benjamin Moseley^1^\\\n ^1^ Tepper School of Business, Carnegie Mellon" +"---\nabstract: 'The approach of Lee (2009) is commonly used to bound the average causal effect in the presence of selection bias, assuming the treatment effect on selection has the same sign for all subjects. This paper generalizes Lee bounds to allow the sign of this effect to be identified by pretreatment covariates, relaxing the standard (unconditional) monotonicity to its conditional analog. Asymptotic theory for generalized Lee bounds is proposed in low-dimensional smooth and high-dimensional sparse designs. The paper also generalizes Lee bounds to accommodate multiple outcomes. It characterizes the sharp identified set for the causal parameter and proposes uniform Gaussian inference on the support function. The estimated bounds nearly achieve point identification in the JobCorps job training program (Lee (2009)), where unconditional monotonicity is unlikely to hold.'\nauthor:\n- 'Vira Semenova[^1]'\nbibliography:\n- 'my\\_new\\_bibtex.bib'\ndate: 'February 24, 2023'\ntitle: Generalized Lee Bounds\n---\n\nIntroduction\n============\n\nRandomized controlled trials are often complicated by endogenous sample selection and non-response. This problem occurs when treatment affects the researcher\u2019s ability to observe an outcome (a selection effect) in addition to the outcome itself (the causal effect of interest). For example, being randomized into a job training program affects both an individual\u2019s wage and employment status. 

Since wages" +"---\nabstract: 'Understanding which features humans rely on in visually recognizing action similarity is a crucial step towards a clearer picture of human action perception from a learning and developmental perspective. In the present work, we investigate to which extent a computational model based on kinematics can determine action similarity and how its performance relates to human similarity judgments of the same actions. To this aim, twelve participants perform an action similarity task, and their performance is compared to that of a computational model solving the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The results of the comparative experiment show that both the model and human participants can reliably identify whether two actions are the same or not. However, the model produces more false hits and has a greater selection bias than human participants. A possible reason for this is the particular sensitivity of the model towards kinematic primitives of the presented actions. In a second experiment, human participants\u2019 performance on an action identification task indicated that they relied solely on kinematic information rather than on action semantics. The results show that both the model and human performance" +"---\nabstract: 'We present a novel particle management method using the Characteristic Mapping framework. In the context of explicit evolution of parametrized curves and surfaces, the surface distribution of marker points created from sampling the parametric space is controlled by the area element of the parametrization function. As the surface evolves, the area element becomes uneven and the sampling, suboptimal. In this method we maintain the quality of the sampling by pre-composition of the parametrization with a deformation map of the parametric space. This deformation is generated by the velocity field associated with the diffusion process on the space of probability distributions and induces a uniform redistribution of the marker points. We also exploit the semigroup property of the heat equation to generate a submap decomposition of the deformation map, which provides an efficient way of maintaining evenly distributed marker points on curves and surfaces undergoing extensive deformations.'\nauthor:\n- 'Xi-Yuan Yin[^1]'\n- Linan Chen\n- 'Jean-Christophe Nave[^2]'\nbibliography:\n- 'densityTransport.bib'\ntitle: 'A Diffusion-Driven Characteristic Mapping Method for Particle Management'\n---\n\nParticle management, Equiareal parametrization, Characteristic Mapping method, Heat equation\n\nIntroduction\n============\n\nThe parametrization of a curve or surface has many applications in computer graphics, computational geometry and geometric modelling." +"---\nabstract: 'The power of a quantum circuit is determined by the number of two-qubit entangling gates that can be performed within the coherence time of the system. In the absence of parallel quantum gate operations, this would limit quantum simulators to shallow circuits. Here, we propose a protocol to parallelize the implementation of two-qubit entangling gates between multiple users who are spatially separated and use a commonly shared spin chain data-bus. Our protocol works by inducing an effective interaction between each pair of qubits without disturbing the others; it therefore increases the rate of gate operations without creating crosstalk. 

This is achieved by tuning the Hamiltonian parameters appropriately, as described in the form of two different strategies. The tuning of the parameters makes different bilocalized eigenstates responsible for the realization of the entangling gates between different pairs of distant qubits. Remarkably, the performance of our protocol is robust against increasing the length of the data-bus and the number of users. Moreover, we show that this protocol can tolerate various types of disorder and is applicable in the context of superconductor-based systems. The proposed protocol can serve to realize two-way quantum communication.'\nauthor:\n- Rozhin Yousefjani\n- Abolfazl Bayat\ntitle:" +"---\nauthor:\n- 'Luciano M. Abreu'\n- 'Felipe J. Llanes-Estrada'\ndate: April 19th 2020\ntitle: Heating triangle singularities in heavy ion collisions\n---\n\nIntroduction {#Introduction}\n============\n\nAt the foundation of particle physics since the 1960s is the understanding of hadrons in quark-model terms. It is thus surprising that there are so many \u201cstructures\u201d in accelerator data that remain unclassified. While there are too few baryons ($qqq$-like) in comparison to early model expectations, there are numerous claims for supernumerary meson ($q\\overline{q}$) resonances. Perhaps this is the plethora of exotic resonances expected from Quantum Chromodynamics, which elevated the quark model to a field theory with sectors counting different numbers of quarks, antiquarks and gluons. But some of those new \u201chadrons\u201d without a clear overall pattern also beg for dynamical explanations based on how the known hadrons rescatter under their strong force. A leading candidate hypothesis to effect much of the probably needed cleanup is the concept of triangle singularities (and other cuspy features), much discussed in hadron physics in the last decade\u00a0[@Bugg:2011jr; @Wang:2013hga; @Szczepaniak:2015eza; @Guo:2019twa]. Such methods are becoming standard among experimental collaborations, reexamining new and earlier \u201cresonance\u201d discoveries for singularity structures not necessarily reflecting a new particle. Serve as example" +"---\nauthor:\n- 'Jinmian Li,'\n- 'Tianjun Li,'\n- 'and Fang-Zhou Xu'\nbibliography:\n- 'ref.bib'\ntitle: Reconstructing boosted Higgs jets from event image segmentation\n---\n\nabstract\n\nBased on the jet image approach, which treats the energy deposition in each calorimeter cell as the pixel intensity (a minimal pixelization sketch is given below), the convolutional neural network (CNN) method has been found to achieve a sizable improvement in jet tagging compared to the traditional jet substructure analysis. In this work, the Mask R-CNN framework is adopted to reconstruct Higgs jets in collider-like events, with the effects of pileup contamination taken into account. This automatic jet reconstruction method achieves higher efficiency of Higgs jet detection and higher accuracy of Higgs boson four-momentum reconstruction than traditional jet clustering and jet substructure tagging methods. Moreover, the Mask R-CNN trained on events containing a single Higgs jet is capable of detecting one or more Higgs jets in events of several different processes, without apparent degradation in reconstruction efficiency and accuracy. 

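The jet-image construction named in the record above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the paper's preprocessing pipeline: the grid size, window half-width, and toy constituents are illustrative values.

```python
import numpy as np

def jet_image(eta, phi, energy, jet_eta, jet_phi, npix=33, half_width=0.8):
    """Bin jet constituents on an (eta, phi) grid centred on the jet axis,
    summing their energies per cell, and return an npix x npix image."""
    deta = np.asarray(eta) - jet_eta
    dphi = (np.asarray(phi) - jet_phi + np.pi) % (2 * np.pi) - np.pi  # wrap phi
    edges = np.linspace(-half_width, half_width, npix + 1)
    img, _, _ = np.histogram2d(deta, dphi, bins=[edges, edges],
                               weights=np.asarray(energy))
    return img

# Toy example: 50 constituents scattered around a jet at (eta, phi) = (0.1, 1.2).
rng = np.random.default_rng(1)
eta = 0.1 + 0.2 * rng.standard_normal(50)
phi = 1.2 + 0.2 * rng.standard_normal(50)
energy = rng.exponential(5.0, 50)
img = jet_image(eta, phi, energy, 0.1, 1.2)
print(img.shape, float(img.sum()))  # (33, 33), total energy inside the window
```

Images of this kind are what a CNN-based tagger or an instance-segmentation network such as Mask R-CNN consumes; the wrap-around of the phi coordinate is the one detector-specific detail that is easy to get wrong.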
The outputs of the network also serve as new handles for the $t\\bar{t}$ background suppression, complementing traditional jet" +"---\nabstract: 'We present constraints on the physical properties (including stellar mass, age, and star formation rate) of 207 galaxy candidates from the Reionization Lensing Cluster Survey (RELICS) and *Spitzer*-RELICS surveys. We measure photometry using T-PHOT and perform spectral energy distribution fitting using EA$z$Y and BAGPIPES. Of the 207 candidates for which we could successfully measure (or place limits on) *Spitzer* fluxes, 23 were demoted to likely $z<4$. Among the high-$z$ candidates, we find intrinsic stellar masses between $1\\times10^6\\rm{M_{\\odot}}$ and $4\\times10^9\\rm{M_\\odot}$, and rest-frame UV absolute magnitudes between $-22.6$ and $-14.5$ mag. While our sample is mostly comprised of $L_{UV}/L^*_{UV}<1$ galaxies, it extends to $L_{UV}/L^*_{UV}\\sim2$. Our sample spans $\\sim4$ orders of magnitude in stellar mass and star formation rates, and exhibits ages that range from maximally young to maximally old. We highlight 11 $z\\geq6.5$ galaxies with detections in *Spitzer*/IRAC imaging, several of which show evidence for some combination of evolved stellar populations, large contributions of nebular emission lines, and/or dust. Among these is PLCKG287+32-2013, one of the brightest $z\\sim7$ candidates known (AB mag 24.9 at 1.6$\\mu$m) with a *Spitzer* 3.6$\\mu$m flux excess suggesting strong \\[OIII\\] + H-$\\beta$ emission ($\\sim$1000\u00c5\u00a0rest-frame equivalent width). We discuss the possible uses and limits of our" +"---\nabstract: 'Recently, the use of [H2+]{}ions instead of protons to overcome space charge challenges in compact cyclotrons has received much attention. This technique has the potential to increase the available beam current from compact cyclotrons by an order of magnitude, paving the way for applications in energy research, medical isotope production, and particle physics, e.g. a decisive search for sterile neutrinos through the IsoDAR experiment. For IsoDAR we go a step beyond just using [H2+]{}and add pre-bunching through a Radio-Frequency Quadrupole (RFQ) embedded in the cyclotron yoke. This puts beam purity and beam quality constraints on the ion source that no published ion source has simultaneously demonstrated so far. Here, we report results from a new multicusp ion source (MIST-1) that produces the world\u2019s highest steady-state current of [H2+]{}from this type of ion source (1\u00a0mA), with exceptionally low emittance (0.05\u00a0$\\pi$-mm-mrad, RMS, normalized) and high purity (80% [H2+]{}). This result shows the feasibility of using a multicusp ion source for IsoDAR and the RFQ direct injection prototype, and paves the way to record-breaking continuous wave (cw) beam currents of 5\u00a0mA [H2+]{}(equivalent to 10\u00a0mA protons) from compact cyclotrons, ideal for underground installation. This represents a significant" +"---\nabstract: 'We develop a duality theory for the problem of maximising expected lifetime utility from inter-temporal wealth over an infinite horizon, under the minimal no-arbitrage assumption of No Unbounded Profit with Bounded Risk (NUPBR). We use only deflators, with no arguments involving equivalent martingale measures, so do not require the stronger condition of No Free Lunch with Vanishing Risk (NFLVR). Our formalism also works without alteration for the finite horizon version of the problem. 

As well as extending the work of Bouchard and Pham [@bp04] to any horizon and to a weaker no-arbitrage setting, we obtain a stronger duality statement, because we do not assume by definition that the dual domain is the polar set of the primal space. Instead, we adopt a method akin to that used for inter-temporal consumption problems, developing a supermartingale property of the deflated wealth and its path that yields an infinite horizon budget constraint and serves to define the correct dual variables. The structure of our dual space allows us to show that it is convex, without forcing this property by assumption. We proceed to enlarge the primal and dual domains to confer solidity on them, and use supermartingale convergence results which exploit Fatou" +"---\nabstract: 'This manuscript studies the Minkowski\u2013Bellman equation, which is the Bellman equation arising from finite or infinite horizon optimal control of unconstrained linear discrete time systems with stage and terminal cost functions specified as Minkowski functions of proper $C$\u2013sets. In regard to the finite horizon optimal control, it is established that, under natural conditions, the Minkowski\u2013Bellman equation and its iteration are well posed. The characterization of the value functions and optimizer maps is derived. In regard to the infinite horizon optimal control, it is demonstrated that, under the same natural conditions, the fixed point of the Minkowski\u2013Bellman equation is unique, in terms of the value function, over the space of Minkowski functions of proper $C$\u2013sets. The characterization of the fixed point value function and optimizer map is reported.'\naddress: 'Beijing Institute of Technology, Beijing, China.'\nauthor:\n- 'Sa\u0161a\u00a0V.\u00a0Rakovi\u0107'\nbibliography:\n- 'MBE.bib'\ntitle: 'The Minkowski\u2013Bellman Equation'\n---\n\nLinear Dynamical Systems, Minkowski Functions and Bellman Equation.\n\nIntroduction {#sec:01}\n============\n\nDynamic programming\u00a0[@bellman:1957; @bellman:dreyfus:1962; @bertsekas:2005; @bertsekas:2018] is an indispensable mathematical technique for closed loop characterization of optimal control. The closed loop solution to finite horizon optimal control of unconstrained discrete time systems, induced by a state transition map $(x,u)\\mapsto f(x,u)$," +"---\nabstract: 'We develop a privacy-preserving distributed algorithm to minimize a regularized empirical risk function when first-order information is not available and data is distributed over a multi-agent network. We employ a zeroth-order method to minimize the associated augmented Lagrangian function in the primal domain using the alternating direction method of multipliers (ADMM). We show that the proposed algorithm, named distributed zeroth-order ADMM (D-ZOA), has intrinsic privacy-preserving properties. Unlike the existing privacy-preserving methods based on the ADMM where the primal or the dual variables are perturbed with noise, the inherent randomness due to the use of a zeroth-order method endows D-ZOA with intrinsic differential privacy. By analyzing the perturbation of the primal variable, we show that the privacy leakage of the proposed D-ZOA algorithm is bounded. In addition, we employ the moments accountant method to show that the total privacy leakage grows sublinearly with the number of ADMM iterations. D-ZOA outperforms the existing differentially private approaches in terms of accuracy while yielding the same privacy guarantee. 

We prove that D-ZOA converges to the optimal solution at a rate of $\\mathcal{O}(1/M)$, where $M$ is the number of ADMM iterations. The convergence analysis also reveals a practically important trade-off between privacy" +"---\nabstract: 'Emotional expressiveness captures the extent to which a person tends to outwardly display their emotions through behavior. Due to the close relationship between emotional expressiveness and behavioral health, as well as the crucial role that it plays in social interaction, the ability to automatically predict emotional expressiveness stands to spur advances in science, medicine, and industry. In this paper, we explore three related research questions. First, how well can emotional expressiveness be predicted from visual, linguistic, and multimodal behavioral signals? Second, how important is each behavioral modality to the prediction of emotional expressiveness? Third, which behavioral signals are reliably related to emotional expressiveness? To answer these questions, we add highly reliable transcripts and human ratings of perceived emotional expressiveness to an existing video database and use this data to train, validate, and test predictive models. Our best model shows promising predictive performance on this dataset ($RMSE=0.65$, $R^2=0.45$, $r=0.74$). Multimodal models tend to perform best overall, and models trained on the linguistic modality tend to outperform models trained on the visual modality. Finally, examination of our interpretable models\u2019 coefficients reveals a number of visual and linguistic behavioral signals\u2014such as facial action unit intensity, overall word count, and use of" +"---\nabstract: 'We briefly present recent progress on our algorithm and its implementation, called SIMPA, described in a previous paper\u00a0[@BOJTAR2019162841]. The algorithm has a new and unique approach to long-term 4D tracking of charged particles in arbitrary static electromagnetic fields. Using the improvements described in this paper, we performed frequency analysis and dynamic aperture studies in ELENA. The effects of the end fields and of the perturbation introduced by the magnetic system of the electron cooler on the dynamic aperture are shown. A special feature of this study is that we have not introduced any multipole errors into the model. The dynamic aperture calculated in this paper is the direct consequence of the geometry of the magnetic elements. Based on the results, we make a few suggestions to reduce the losses during the deceleration of the beam.'\nauthor:\n- Lajos Bojt\u00e1r\nbibliography:\n- 'dynap.bib'\ntitle: Frequency analysis and dynamic aperture studies in ELENA with realistic 3D magnetic fields \n---\n\n Introduction \n==============\n\nThe importance of fringe fields in small rings is well known and it has been taken into account for multipole magnets to various degrees for decades [@LEEWHITING1969305; @PhysRevSTAB.3.124001; @FOREST1988474; @PhysRevSTAB.18.064001]. The approach to particle tracking we described in\u00a0[@BOJTAR2019162841] naturally" +"---\nabstract: 'The Central Molecular Zone (CMZ) of our Galaxy hosts an extreme environment analogous to that found in typical starburst galaxies in the distant universe. In order to understand dust properties in environments like our CMZ, we present results from a joint SED analysis of our AzTEC/Large Millimeter Telescope survey, together with existing *Herschel* far-IR data on the CMZ, from a wavelength range of $160$ $\\mu m$ to $1.1$ $mm$. 

We include global foreground and background contributions in a novel Bayesian modeling that incorporates the Point Spread Functions (PSFs) of the different maps, which enables the full utilization of our high-resolution ($10.5''$) map at 1.1 $mm$ and reveals unprecedentedly detailed information on the spatial distribution of dusty gas across the CMZ. There is a remarkable trend of increasing dust spectral index $\\beta$, from $2.0-2.4$, toward dense peaks in the CMZ, indicating a deficiency of large grains or a fundamental change in dust optical properties. This environmental dependence of $\\beta$ could have a significant impact on the determination of dust temperature in other studies. Depending on how the optical properties of dust deviate from the conventional model, dust temperatures could be underestimated by $10-50\\%$ in particularly dense regions.'\nauthor:" +"---\nabstract: 'The polar Kerr effect in superconducting [Sr$_2$RuO$_4$]{}\u00a0implies finite ac anomalous Hall conductivity. Since an intrinsic anomalous Hall effect (AHE) is not expected for a chiral superconducting pairing developed on the single Ru $d_{xy}$ orbital, multiorbital chiral pairing actively involving the Ru $d_{xz}$ and $d_{yz}$ orbitals has been proposed as a potential mechanism. Here we propose that AHE could still arise even if the chiral superconductivity is predominantly driven by the $d_{xy}$ orbital. This is demonstrated through two separate models which take into account subdominant orbitals in the Cooper pairing, one involving the oxygen $p_x$ and $p_y$ orbitals in the RuO$_2$ plane, and another the $d_{xz}$ and $d_{yz}$ orbitals. In both models, finite orbital mixing between the dominant $d_{xy}$ and the other orbitals may induce inter-orbital pairing between them, and the resultant states support intrinsic AHE, with Kerr rotation angles that could potentially be reconciled with the experimental observation. Our proposal therefore sheds new light on the microscopic pairing in [Sr$_2$RuO$_4$]{}. We also show that the intrinsic Hall effect is generally absent for non-chiral states such as $\\mathcal{S}+i\\mathcal{D}$, $\\mathcal{D}+i\\mathcal{P}$ and $\\mathcal{D}+i\\mathcal{G}$, which provides a clear constraint on the symmetry of the superconducting order in this material.'\naddress:\n- 'Kavli Institute for
Only then does it become possible to quantify the appearance of muons shortly before stellar core bounce and how the post-bounce prompt neutrino emission is modified.\\\n \\\n DOI: 10.1103/PhysRevD.102.123001\\\nauthor:\n- Tobias Fischer\n- Gang Guo\n- 'Gabriel Mart[\u00ed]{}nez-Pinedo'\n- 'Matthias Liebend[\u00f6]{}rfer'\n- Anthony Mezzacappa\nbibliography:\n- 'references.bib'\ntitle: Muonization of supernova matter\n---\n\nIntroduction {#sec:intro}\n============\n\nA core-collapse supernova (SN) determines the final fate of all stars more massive than about 8\u00a0M$_\\odot$. The associated stellar core collapse is triggered by deleptonization due to nuclear electron capture in the core and the subsequent escape of the electron neutrinos produced, lowering the" +"---\nabstract: 'This paper introduces an efficient reactive routing protocol considering the mobility and the reliability of a node in Cognitive Radio Sensor Networks (CRSNs). The proposed protocol accommodates the dynamic behavior of the spectrum availability and selects a stable transmission path from a source node to the destination. Outlined as a weighted graph problem, the proposed protocol measures the weight of an edge using the mobility patterns of the nodes and the channel availability. Furthermore, the mobility pattern of a node is defined in the proposed routing protocol from the viewpoint of distance, speed, direction, and the node\u2019s reliability. In addition, the spectrum awareness in the proposed protocol is measured over the number of shared common channels and the channel quality. It is anticipated that the proposed protocol will show efficient routing performance by selecting stable and secure paths from source to destination. Simulations are carried out to assess the performance of the protocol, and the results show that it outperforms existing ones.'\nauthor:\n- 'Email:[sharmin.akter2.cse@ulab.edu.bd$^{1}$, shahriar.rahman@ulab.edu.bd$^{2}$, nafees@ieee.org$^{3}$]{}'\ntitle: An Efficient Routing Protocol for Secured Communication in Cognitive Radio Sensor Networks\n---\n\nCognitive Radio Sensor Networks; V2V Communications; Ad Hoc Networks; WSN; Routing Protocol\n\nIntroduction\n============\n\nWith the latest advancement" +"---\nabstract: |\n Five transport coefficients of the cuprate superconductor Bi$_2$Sr$_{2-x}$La$_x$CuO$_{6+\\delta}$ were measured in the normal state down to low temperature, reached by applying a magnetic field (up to 66\u00a0T) large enough to suppress superconductivity. The electrical resistivity, Hall coefficient, thermal conductivity, Seebeck coefficient and thermal Hall conductivity were measured in two overdoped single crystals, with La concentration $x = 0.2$ ([$T_{\\rm c}$]{}\u00a0$=18$\u00a0K) and $x = 0.0$ ([$T_{\\rm c}$]{}\u00a0$=10$\u00a0K). The samples have dopings $p$ very close to the critical doping [$p^{\\star}$]{}\u00a0where the pseudogap phase ends. The resistivity displays a linear dependence on temperature whose slope is consistent with Planckian dissipation. The Hall number [$n_{\\rm H}$]{}\u00a0decreases with reduced $p$, consistent with a drop in carrier density from $n = 1+p$ above [$p^{\\star}$]{}\u00a0to $n=p$ below [$p^{\\star}$]{}. This drop in [$n_{\\rm H}$]{}\u00a0is concomitant with a sharp drop in the density of states inferred from prior NMR Knight shift measurements. 

The thermal conductivity satisfies the Wiedemann-Franz law, showing that the pseudogap phase at $T = 0$ is a metal whose fermionic excitations carry heat and charge as do conventional electrons. The Seebeck coefficient diverges logarithmically at low temperature, a signature of quantum criticality. The thermal" +"---\nabstract: 'Using archival spectral-imaging data with a total exposure of $\\sim144$\u00a0ks obtained by [*Chandra*]{}, 43 X-ray sources are detected within the half-light radius of globular cluster M62 (NGC6266). Based on the X-ray colour-luminosity diagram or the positional coincidences with known sources, we have classified these sources into different groups of compact binaries including cataclysmic variable (CV), quiescent low mass X-ray binary (qLMXB), millisecond pulsar (MSP) and black hole (BH). Candidates of the X-ray counterparts of 12 CVs, 4 qLMXBs, 2 MSPs and 1 BH are identified in our analysis. The data used in our analysis consist of two frames separated by 12 years, which enable us to search for the long-term variability as well as the short-term X-ray flux variability within each observation window. Evidence for short-term and long-term variability has been found in 7 and 12 sources, respectively. For a number of bright sources with X-ray luminosities $L_{x}\\gtrsim 10^{32}$\u00a0erg/s, we have characterized their spectral properties in further detail. By comparing the X-ray population in M62 with those in several other prototypical globular clusters, we found that the proportion of bright sources is larger in M62, which can possibly be a result of its active dynamical" +"---\nabstract: 'Recommender systems (RS) suggest items based on the estimated preferences of users. Recent RS methods utilise vector space embeddings and deep learning methods to make efficient recommendations. However, most of these methods overlook the sequentiality feature and consider each interaction, e.g., a check-in, as independent from the others. The proposed method considers the sequentiality of the interactions of users with items and uses them to make recommendations of a list of multi-item sequences. The proposed method uses FastText\u00a0[@bojanowski2016enriching], a well-known technique in natural language processing (NLP), to model the relationship among the subunits of sequences, e.g., tracks, playlists, and utilises the trained representation as an input to a traditional recommendation method. The recommended lists of multi-item sequences are evaluated by the ROUGE\u00a0[@lin2003automatic; @lin2004rouge] metric, which is also commonly used in the NLP literature. The current experimental results reveal that it is possible to recommend a list of multi-item sequences, in addition to the traditional next item recommendation. Also, the usage of FastText, which utilises sub-units of the input sequences, helps to overcome the cold-start user problem. Even though current experimental results are promising, there are many missing pieces in the experimental section. In the future, I want to analyse and" +"---\nabstract: 'As a result of an increasingly automated and digitized industry, processes are becoming more complex. Augmented Reality has shown considerable potential in assisting workers with complex tasks by enhancing user understanding and experience with spatial information. However, the acceptance and integration of AR into industrial processes are still limited due to the lack of established methods and tedious integration efforts. 

Meanwhile, deep neural networks have achieved remarkable results in computer vision tasks and hold great promise for enriching Augmented Reality applications. In this paper, we propose an Augmented-Reality-based human assistance system to assist workers in complex manual tasks, in which we incorporate deep neural networks for computer vision tasks. More specifically, we combine Augmented Reality with object and action detectors to make workflows more intuitive and flexible. To evaluate our system in terms of user acceptance and efficiency, we conducted several user studies. We found a significant reduction in time to task completion in untrained workers and a decrease in error rate. Furthermore, we investigated the users\u2019 learning curve with our assistance system.'\nauthor:\n- 'Linh K\u00e4stner$^{1}$, Leon Eversberg$^{1}$, Marina Mursa$^{1}$ and Jens Lambrecht$^{1}$[^1]'\nbibliography:\n- 'references.bib'\ntitle: '**Integrative Object and Pose to Task Detection for an Augmented-Reality-based" +"---\nabstract: 'Research on definition extraction has been conducted for well over a decade, largely with significant constraints on the type of definitions considered. In this work, we present DeftEval, a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language. Definitions and glosses in free text often appear without explicit indicators, across sentence boundaries, or in an otherwise complex linguistic manner. DeftEval involved 3 distinct subtasks: 1) sentence classification, 2) sequence labeling, and 3) relation extraction.'\nauthor:\n- |\n Sasha Spala$^1$, Nicholas A Miller$^1$, Franck Dernoncourt$^2$, Carl Dockhorn$^1$\\\n $^1$Adobe Inc., $^2$Adobe Research\\\n 345 Park Ave\\\n San Jose, CA 95110-2704\\\n [{sspala,nimiller,dernonco,cdockhorn}@adobe.com]{}\\\nbibliography:\n- 'definition\\_extraction.bib'\ntitle: 'SemEval-2020 Task 6: Definition extraction from free text with the DEFT corpus'\n---\n\nIntroduction\n============\n\nDefinition extraction as a complex, real-world task is currently an emerging field of study. Traditional definition extraction approaches mostly rely on simple, syntactically straightforward examples with relatively little variance in vocabulary. Corpora, including the WCL [@Navigli_data:2010] and ukWaC [@Ferraresi:2008], typically consist of \u201cdefinition sentences\u201d which follow a standard *X is a (type) Y* or *X, such as Y* syntactic structure. Many also contain" +"---\nauthor:\n- 'Titouan Lazeyras,'\n- 'Francisco Villaescusa-Navarro,'\n- Matteo Viel\nbibliography:\n- 'references.bib'\ntitle: The impact of massive neutrinos on halo assembly bias\n---\n\nIntroduction {#sec:intro}\n============\n\nIt is now well established that most of the observed tracers of the large-scale structure (LSS) of the Universe, such as galaxies, reside in dark matter halos. Hence the statistics of halos determine those of galaxies on large scales, making their distribution one of the key ingredients of the theoretical description of LSS. In the context of perturbation theory, the statistics of halos are written in terms of bias parameters multiplying operators constructed out of the matter density field (see [@Desjacques:2016] for a recent review). 

On sufficiently large scales, a linear relation between the halo density field ${\\delta}_h$ and the matter one ${\\delta}_m$ is enough to describe the clustering pattern of halos $$\\delta_h(\\mathbf{x},\\tau) = b_1(\\tau)\\, \\delta_m(\\mathbf{x},\\tau) + \\dots, \\label{eq:localbias}$$ where $\\delta_a$ is the density contrast of the considered field, defined as $\\delta_a=\\rho_a/\\bar{\\rho}_a-1$, $b_1$ is the linear bias of dark matter halos and the dots indicate that we only wrote the first term of the expansion.\n\nThis bias parameter was commonly thought to depend only on the redshift and mass of the considered" +"---\nabstract: 'Time series of photospheric magnetic parameters of solar active regions (ARs) are used to answer whether scaling properties of fluctuations embedded in such time series help to distinguish between flare-quiet and flaring ARs. We examine a total of 118 flare-quiet and 118 flaring AR patches (called HARPs), which were observed from 2010 to 2016 by the *Helioseismic and Magnetic Imager* (HMI) on board the *Solar Dynamics Observatory* (SDO). Specifically, the scaling exponent of fluctuations is derived applying the Detrended Fluctuation Analysis (DFA) method to a dataset of 8-day time series of 18 photospheric magnetic parameters at 12-min cadence for all HARPs under investigation. We first find a statistically significant difference in the distribution of the scaling exponent between the flare-quiet and flaring HARPs, in particular for some space-averaged, signed parameters associated with magnetic field line twist, electric current density, and current helicity. The flaring HARPs tend to show higher values of the scaling exponent compared to those of the flare-quiet ones, even though there is considerable overlap between their distributions. In addition, for both the flare-quiet and flaring HARPs the DFA analysis indicates that (1) time series of most of the magnetic parameters under consideration are non-stationary, and" +"---\nabstract: 'The effective-Lagrangian description of Lorentz-invariance violation provided by the so-called Standard-Model Extension covers all the sectors of the Standard Model, allowing for model-independent studies of high-energy phenomena that might leave traces at relatively-low energies. In this context, the quantification of the large set of parameters characterizing Lorentz-violating effects is well motivated. In the present work, effects from the Lorentz-nonconserving Yukawa sector on the electromagnetic moments of charged leptons are calculated, estimated, and discussed. Following a perturbative approach, explicit expressions of leading contributions are derived and upper bounds on Lorentz violation are estimated from current data on electromagnetic moments. Scenarios regarding the coefficients of Lorentz violation are considered. In a scenario of two-point insertions preserving lepton flavor, the bound on the electron electric dipole moment yields limits as stringent as $10^{-28}$, whereas muon and tau-lepton electromagnetic moments determine bounds as restrictive as $10^{-14}$ and $10^{-6}$, respectively. Another scenario, defined by the assumption that Lorentz-violating Yukawa couplings are Hermitian, leads to less stringent bounds, provided by the muon anomalous magnetic moment, which turn out to be as restrictive as $10^{-14}$.'\nauthor:\n- 'J. Alfonso Ahuatzi-Avenda\u00f1o$^a$, Javier Monta\u00f1o$^b$, H\u00e9ctor Novales-S\u00e1nchez$^a$, M\u00f3nica Salinas$^a$, and J. 
Jes\u00fas Toscano$^a$'\ntitle: 'Bounds on Lorentz-violating Yukawa" +"---\nabstract: 'In this work, we explore whether it is possible to learn representations of endoscopic video frames to perform tasks such as identifying surgical tool presence without supervision. We use a maximum mean discrepancy (MMD) variational autoencoder (VAE) to learn low-dimensional latent representations of endoscopic videos and manipulate these representations to distinguish frames containing tools from those without tools. We use three different methods to manipulate these latent representations in order to predict tool presence in each frame. Our fully unsupervised methods can identify whether endoscopic video frames contain tools with average precision of 71.56, 73.93, and 76.18, respectively, comparable to supervised methods. Our code is available at .'\nauthor:\n- 'David Z. Li'\n- Masaru Ishii\n- 'Russell H. Taylor'\n- 'Gregory D. Hager'\n- Ayushi Sinha\nbibliography:\n- 'mybib.bib'\ntitle: Learning Representations of Endoscopic Videos to Detect Tool Presence Without Supervision\n---\n\nIntroduction\n============\n\nDespite the abundance of medical image data, progress in learning from such data has been impeded by the lack of labels and the difficulty in acquiring accurate labels. With the increase in minimally invasive procedures\u00a0[@Tsui13], an increasing number of endoscopic videos (Fig.\u00a0\\[fig:examples\\]) are available. This can open up the opportunity for video-based" +"---\nabstract: 'The discrete truncated Wigner approximation (DTWA) is a powerful tool for analyzing dynamics of quantum spin systems. Since the DTWA includes the leading-order quantum corrections to a mean-field approximation, it is naturally expected that the DTWA becomes more accurate when the range of interactions of the system increases. However, quantitative corroboration of this expectation is still lacking mainly because it is generally difficult in a large system to evaluate a timescale on which the DTWA is quantitatively valid. In order to investigate how the validity timescale depends on the interaction range, we analyze dynamics of quantum spin models with a step function type interaction subjected to a sudden quench of a magnetic field by means of both DTWA and its extension including the second-order correction, which is derived from the Bogoliubov-Born-Green-Kirkwood-Yvon equation. We also develop a formulation for calculating the second-order R\u00e9nyi entropy within the framework of the DTWA. By comparing the time evolution of the R\u00e9nyi entropy computed by the DTWA with that by the extension including the correction, we find that both in the one- and two-dimensional systems the validity timescale increases algebraically with the range of the step function type interaction.'\nauthor:\n- Masaya Kunimi" +"---\nbibliography:\n- 'nn\\_fc\\_spherconv.bib'\ntitle: 'Physics-inspired adaptions to low-parameter neural network weather forecasts systems'\n---\n\nIntroduction\n============\n\nWeather forecasting has for decades been dominated by numerical models built on physical principles, the so-called Numerical Weather Prediction Models (NWP). These models have seen a constant increase in skill over time [@bauer_quiet_2015]. Recently, however, there has been a surge of interest in data-driven weather forecasting in the medium-range ($\\sim$2-14 days ahead). These have often - but not exclusively - used neural networks (e.g. 
@scher_toward_2018 [@scher_weather_2019-1; @dueben_challenges_2018; @weyn_can_2019; @weyn_improving_2020; @faranda2021enhancing; @scher_ensemble_2020; @rasp2021data; @bi2022panguweather; @keisler_forecasting_2022; @pathak_fourcastnet_2022; @chen2023fengwu; @lam2023graphcast; @benbouallegue2023rise] ), also in combination with physics-based models (e.g. @arcomano2022hybrid). A historic overview of paradigms in weather prediction is outlined in @balaji_climbing_2020. The use of convolutional neural networks (CNNs) (e.g. @scher_toward_2018 [@scher_weather_2019-1; @weyn_can_2019; @rasp2021data]) or of a local network that is shared across the domain [@dueben_challenges_2018] dominated the early data-driven approaches. What these methods have in common is that they use global data on a regular lat-lon grid. This leads to distortions, especially close to the poles [@coors_spherenet_2018]. However, a standard convolution or shared local architecture does not take such distortion into account since it uses a filter whose size is a fixed number of gridpoints" +"---\nabstract: 'The ability to characterize the state of dynamic systems has been a pertinent task in the time series analysis community. Traditional measures such as Lyapunov exponents are oftentimes difficult to recover from noisy data, especially if the dimensionality of the system is not known. More recent binary and network-based testing methods have delivered promising results for unknown deterministic systems, however noise injected into a periodic signal leads to false positives. Recently, we showed the advantage of using persistent homology as a tool for achieving dynamic state detection for systems with no known model and showed its robustness to white Gaussian noise. In this work, we explore the robustness of the persistence-based methods to the influence of colored noise and show that colored noise processes of the form $1/f^{\\alpha}$ lead to false positive diagnostics at lower signal-to-noise ratios for $\\alpha<0$.'\nauthor:\n- 'Joshua R.\u00a0Tempelman'\n- 'Audun D.\u00a0Myers'\n- 'Jeffrey T.\u00a0Scruggs'\n- 'Firas A.\u00a0Khasawneh'\nbibliography:\n- 'IDETC2020.bib'\ntitle: Effects of Correlated Noise on the Performance of Persistence Based Dynamic State Detection Methods\n---\n\nIntroduction\n============\n\nThe distinction between regular and chaotic dynamics has been a thoroughly researched topic in the fields" +"---\nauthor:\n- 'Y. Lebreton'\n- 'D. R. Reese'\nbibliography:\n- 'article.bib'\ndate: 'Received 8 June 2020; accepted July 2020'\nsubtitle: 'A public Python tool to age-date, weigh, size up stars, and more'\ntitle: 'SPInS, a pipeline for massive stellar parameter inference'\n---\n\n[Stellar parameters are required in a variety of contexts, ranging from the characterisation of exoplanets to Galactic archaeology. Among them, the age of stars cannot be directly measured, while the mass and radius can be measured in some particular cases (e.g. binary systems, interferometry). More generally, stellar ages, masses, and radii have to be inferred from stellar evolution models by appropriate techniques.]{} [We have designed a Python tool named SPInS. It takes a set of photometric, spectroscopic, interferometric, and/or asteroseismic observational constraints and, relying on a stellar model grid, provides the age, mass, and radius of a star, among others, as well as error bars and correlations. 
We make the tool available to the community via a dedicated website.]{} [SPInS uses a Bayesian approach to find the probability distribution function of stellar parameters from a set of classical constraints. At the heart of the code is a Markov Chain Monte Carlo solver coupled with interpolation" +"---\nabstract: 'Graph convolutional neural networks (GCNNs) have received much attention recently, owing to their capability in handling graph-structured data. Among the existing GCNNs, many methods can be viewed as instances of a neural message passing motif; features of nodes are passed around their neighbors, aggregated and transformed to produce better nodes\u2019 representations. Nevertheless, these methods seldom use node transition probabilities, a measure that has been found useful in exploring graphs. Furthermore, when the transition probabilities are used, their transition direction is often improperly considered in the feature aggregation step, resulting in an inefficient weighting scheme. In addition, although a great number of GCNN models with increasing levels of complexity have been introduced, the GCNNs often suffer from over-fitting when trained on small graphs. Another issue of the GCNNs is over-smoothing, which tends to make nodes\u2019 representations indistinguishable. This work presents a new method to improve the message passing process based on node transition probabilities by properly considering the transition direction, leading to a better weighting scheme in nodes\u2019 features aggregation compared to the existing counterpart. Moreover, we propose a novel regularization method termed *DropNode* to address the over-fitting and over-smoothing issues simultaneously. DropNode randomly discards part of a" +"---\nauthor:\n- |\n [Siliang Tang\\*, Qi Zhang\\*, Tianpeng Zheng\\*, Mengdi Zhou\\*, Zhan Chen\\*\\*, Lixing]{}\\\n [Shen\\*\\*\\*, Xiang Ren\\*\\*\\*, Yueting Zhuang\\*, Shiliang Pu\\*\\* and Fei Wu\\*]{}\\\n [\\*Zhejiang University]{}\\\n {siliang, zhangqihit, 21721120, 21721125, yzhuang, wufei}@zju.edu.cn\\\n [\\*\\*Hikvision]{}\\\n {chenzhan, shenlixing, pushiliang}@hikvision.com\\\n [\\*\\*\\*University of Southern California]{}\\\n xiangren@usc.edu\n- Siliang Tang\n- Qi Zhang\n- Tianpeng Zheng\n- Mengdi Zhou\n- Zhan Chen\n- Lixing Shen\n- Xiang Ren\n- Yueting Zhuang\n- Shiliang Pu\n- Fei Wu\nbibliography:\n- 'acl2018.bib'\ntitle: Two Step Joint Model for Drug Drug Interaction Extraction\n---\n\nAbstract {#abstract .unnumbered}\n========\n\nWhen patients need to take medicine, particularly taking more than one kind of drug simultaneously, they should be alerted that drug-drug interactions may exist. Interaction between drugs may have a negative impact on patients or even cause death. Generally, drugs that conflict with a specific drug (or label drug) are usually described in its drug label or package insert. Since more and more new drug products come into the market, it is difficult to collect such information manually. We take part in the Drug-Drug Interaction\u00a0(DDI) Extraction from Drug Labels challenge of Text Analysis Conference\u00a0(TAC) 2018, choosing task1 and task2 to automatically extract DDI-related mentions and" +"---\nabstract: 'The paper addresses compact oscillatory states (compact breathers) in translationally-invariant lattices with flat dispersion bands. The compact breathers appear in such systems even in the linear approximation. 
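The *DropNode* description in the GCNN abstract above is cut off in this record; a common reading of such node-level dropout is to zero out entire node rows during training, which perturbs both the features and the message-passing neighbourhoods. The sketch below implements that generic interpretation (an assumption, not necessarily the authors' exact formulation).

```python
import numpy as np

def drop_node(features, drop_rate, training=True, rng=None):
    """Randomly discard (zero out) whole nodes' feature rows.

    Unlike entrywise dropout, entire nodes are removed, so downstream
    aggregation steps also lose those nodes' messages for this pass.
    """
    if not training or drop_rate == 0.0:
        return features
    rng = rng or np.random.default_rng()
    keep = rng.random(features.shape[0]) >= drop_rate  # one flag per node
    # Rescale kept rows so the expected feature magnitude is unchanged.
    return features * keep[:, None] / (1.0 - drop_rate)

X = np.ones((5, 4))  # 5 nodes, 4 features each
print(drop_node(X, drop_rate=0.4, rng=np.random.default_rng(0)))
```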
If the interactions are nonlinear, but comply with the flat-band symmetry, the compact breather solutions exist, but can lose their stability for certain parameter values. As benchmark nonlinear potentials, we use the $\\beta$-FPU (Fermi-Pasta-Ulam) and vibro-impact models. Loss of stability is numerically observed to occur through either pitchfork or Hopf bifurcations. The loss of stability can occur through two qualitatively different mechanisms \u2013 through internal instability in the basic lattice elements, or through interaction of the compact breather with the linear passband of the lattice. The former scenario is more typical for high-amplitude breathers, and the latter \u2013 for low amplitudes. For the high-amplitude case, insights into the nature of compact-mode loss-of-stability are obtained by resorting to the limit of a piecewise-linear system, where interactions are represented by conservative impacts. This issue calls for detailed introspection into the integrability of piecewise-linear (impacting) systems and their relation to the smooth system. An idea for a sensor based on the studied mechanisms is suggested.'\naddress: 'Faculty of Mechanical Engineering, Technion, Haifa 32000, Israel'\nauthor:" +"---\nabstract: |\n We use the results of Moser and Kn\u00f6rrer on relations between geodesics on quadrics and solutions of the classical Neumann system to describe explicitly the geodesic scattering on hyperboloids.\n\n We explain the relation of Kn\u00f6rrer\u2019s reparametrisation with projectively equivalent metrics on quadrics introduced by Tabachnikov and independently by Matveev and Topalov, giving a new proof of their result. We show that the projectively equivalent metric is regular on the projective closure of hyperboloids and extend Kn\u00f6rrer\u2019s map to this closure.\naddress:\n- 'Department of Mathematical Sciences, Loughborough University, Loughborough LE11 3TU, UK, Moscow State University and Steklov Mathematical Institute, Moscow, Russia'\n- |\n School of Mathematical Science\\\n Huaqiao University, Quanzhou, Fujian, P. R. China 362021\nauthor:\n- 'A.P. Veselov'\n- 'L. Wu'\ntitle: 'Geodesic scattering on hyperboloids and Kn\u00f6rrer\u2019s map'\n---\n\nIntroduction\n============\n\nGeodesic flows on ellipsoids have been a subject of substantial interest since the fundamental work of Jacobi [@Jac] (see, in particular, the discussion in Arnold\u2019s book [@A]). Moser reinvigorated this area in the 1970s by showing the deep relation to classical geometry, spectral theory, and soliton theory [@M1; @M2; @M3].\n\nThe geodesic flow on the hyperboloids, to the best of our knowledge, was not studied in" +"---\nabstract: 'We explore the method of old quantization as applied to states with nonzero angular momentum, and show that it leads to qualitatively and quantitatively useful information about systems with spherically symmetric potentials. We begin by reviewing the traditional application of this model to hydrogen, and discuss the way Einstein-Brillouin-Keller quantization resolves a mismatch between old quantization states and true quantum mechanical states. We then analyze systems with logarithmic and Yukawa potentials, and compare the results of old quantization to those from solving Schr\u00f6dinger\u2019s equation. We show that the old quantization techniques provide insight into the spread of energy levels associated with a given principal quantum number, as well as giving quantitatively accurate approximations for the energies. 
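The old-quantization procedure for spherically symmetric potentials sketched above amounts to imposing a quantization condition on the radial action integral. A numerical illustration for a logarithmic potential is given below; the potential, units, the EBK half-integer correction, and all solver brackets are assumptions chosen for the sketch, not the paper's specific setup.

```python
# Sketch: EBK quantization of radial motion in V(r) = ln(r),
# in units m = hbar = 1 and with angular momentum L = 1.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

m, hbar, L = 1.0, 1.0, 1.0

def radicand(r, E):
    # 2m(E - V(r)) - L^2/r^2, the square of the radial momentum.
    return 2.0 * m * (E - np.log(r)) - (L / r) ** 2

def radial_action(E):
    # Classical turning points bracket r = 1, where the radicand peaks.
    r_in = brentq(radicand, 1e-8, 1.0, args=(E,))
    r_out = brentq(radicand, 1.0, 20.0 * np.exp(E), args=(E,))
    p_r = lambda r: np.sqrt(max(radicand(r, E), 0.0))
    val, _ = quad(p_r, r_in, r_out, limit=200)
    return 2.0 * val  # full closed radial loop

def ebk_energy(n_r):
    # Solve radial action = 2*pi*hbar*(n_r + 1/2) for the energy.
    target = 2.0 * np.pi * hbar * (n_r + 0.5)
    return brentq(lambda E: radial_action(E) - target, 0.51, 10.0)

for n_r in range(3):
    print(n_r, ebk_energy(n_r))
```

Root-finding for the turning points, quadrature for the action, and a second root-find for the energy are exactly the kind of synthesis of numerical methods the following sentence refers to.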
Analyzing systems in this manner involves an educationally valuable synthesis of multiple numerical methods, as well as providing deeper insight into the connections between classical and quantum mechanical physics.'\nauthor:\n- Nelia\u00a0Mann\n- Jessica\u00a0Matli\n- Tuan\u00a0Pham\ntitle: 'Old Quantization, Angular Momentum, and Nonanalytic Problems'\n---\n\nIntroduction\n============\n\nThe origins of quantum mechanics are usually dated to 1905, with the publication of Einstein\u2019s work on the photoelectric effect [@photoelectric], even though the study of Schr\u00f6dinger\u2019s equation and matrix mechanics\u2014what we" +"---\nabstract: 'We revisit the signatures from collisions of cosmic rays with sub-GeV dark matter (DM) in the Milky Way. In addition to the widely discussed upscattered DM component that can be probed by existing DM and neutrino experiments, we examine the associated signals in $\\gamma$-rays and neutrinos that span a wide energy range due to the inelastic scatterings. Assuming a simple vector portal DM model for illustration, we compute both the upscattered DM flux by cosmic-ray protons, and the resulting emission of secondary $\\gamma$-rays and high-energy neutrinos from proton excitation, hadronization, and the subsequent meson decay. We derive limits on coupling constants in the vector portal model using data from the $\\gamma$-ray and high-energy neutrino telescopes including Fermi, H.E.S.S. and IceCube. These limits are compared to those obtained by considering the upscattered DM signals at the low-energy DM/neutrino detectors XENON1T/MiniBooNE and IceCube. For this particular model, the limits are set predominantly by non-detection of the upscattered DM events in XENON1T, for most of the DM mass range due to the large scattering cross section at low energies. Nevertheless, our study demonstrates that the $\\gamma$-ray and neutrino signals, traditionally considered as indirect probes for DM annihilation and decay, can also" +"---\nauthor:\n- 'Yongxin\u00a0Liu, Jian\u00a0Wang, Jianqiang\u00a0Li, Houbing\u00a0Song,\u00a0 Thomas\u00a0Yang,\u00a0 Shuteng\u00a0Niu and Zhong\u00a0Ming [^1][^2][^3] [^4]'\nbibliography:\n- 'ReviewRef.bib'\ntitle: 'Zero-Bias Deep Learning for Accurate Identification of Internet of Things (IoT) Devices'\n---\n\nIntroduction\n============\n\nThe Internet of Things (IoT) is characterized by the interconnection and interaction of smart objects (objects or devices with embedded sensors, onboard data processing capability, and a means of communication) to provide applications and services that would otherwise not be possible [@IIOT17]. The convergence of sensor, actuator, information, and communication technologies in IoT produces massive amounts of data that need to be sifted through to facilitate reasonably accurate decision-making and control [@BDA19]. Big data analytics has the potential to enable the move from IoT to real-time control [@7406686]. However, due to the open nature of IoT, IoT is subject to cybersecurity threats [@8897627; @liu2019domain]. One typical cybersecurity threat is the identity spoofing attack, where an adversary passively collects information and then mimics the identity of legitimate devices to send fake information or conduct other malicious activities. 
Such attacks can be extremely dangerous when they appear in critical infrastructures [@skarmeta2014decentralized].\n\nConventional approaches to prevent identity spoofing" +"---\nabstract: 'High-energy neutral and charged Drell\u2013Yan differential cross-section measurements are powerful probes of quark-lepton contact interactions that produce growing-with-energy effects. This paper provides theoretical predictions of the new physics effects at Next-to-Leading order in QCD and including one-loop EW corrections at single-logarithm accuracy. The predictions are obtained from SM Monte Carlo simulations through analytic reweighting. This eliminates the need to perform a scan on the new physics parameter space, enabling the global exploration of all the relevant interactions. Furthermore, our strategy produces consistently showered events to be employed for a direct comparison of the new physics predictions with the data, or to validate the unfolding procedure that underlies the cross-section measurements. Two particularly relevant interactions, associated with the W and Y parameters of EW precision tests, are selected for illustration. Projections are presented for the sensitivity of the LHC and of the HL-LHC measurements. The impact on the sensitivity of several sources of uncertainties is quantified.'\nauthor:\n- |\n Riccardo Torre$^{a,b}$, Lorenzo Ricci$^{c}$, Andrea Wulzer$^{b,c,d}$\\\n \\\n [*$^a$ INFN, Sezione di Genova, Via Dodecaneso 33, I-16146 Genova, Italy*]{}\\\n [*$^b$ CERN, 1211 Geneva 23, Switzerland*]{}\\\n [*$^c$ Theoretical Particle Physics Laboratory (LPTP), Institute of Physics,*]{}\\\n [*EPFL, Lausanne, Switzerland*]{}\\\n [*$^d$ Dipartimento" +"---\nabstract: 'In the world of building acoustics, a standard tapping machine has long existed for the purpose of replicating and regulating impact noise. However, there still exist other kinds of structure-borne noise which could benefit from being considered when designing a building. One of these types of sources is rolling noise. This report details a proposal for defining a standard rolling noise machine. Just as the standard tapping machine can be used in any building and on any surface as a way of characterizing and comparing the performance of various floors with respect to impact noise, the development of a standard rolling device would enable the same evaluation and comparison to be made with respect to rolling noise. The hope is that such a prototype may serve as a launch pad for further development, spurring future discussion and criticism on the topic by others who may wish to aid in the pursuit of a truly standardized rolling noise machine.'\nauthor:\n- 'M. Edwards[^1]'\n- 'R. Gonzalez Diaz'\n- 'N. Dallaji'\n- 'L. Jaouen'\nbibliography:\n- 'RecommendationStandardDevice.bib'\ndate: July 2020\ntitle: Recommendation for a Standard Rolling Noise Machine\n---\n\nIntroduction {#sec:RollingMachine_introduction}\n============\n\nIn the world of building acoustics, a standard" +"---\nabstract: 'I propose a state-of-the-art deep neural architectural solution for handwritten character recognition for Bengali alphabets, compound characters as well as numerical digits that achieves a state-of-the-art accuracy of 96.8% in just 11 epochs. Similar work has been done before by Chatterjee, Swagato, et al. [@kingshuk; @chatterhejee] but they achieved 96.12% accuracy in about 47 epochs. 
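The handwritten-character-recognition setup described above can be prototyped with a compact from-scratch CNN. The sketch below is not the AKHCRNet architecture (which is not specified in this excerpt); it is a generic stand-in, and the 32x32 grayscale input size and 84-class output are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class SmallHCRNet(nn.Module):
    """Generic from-scratch CNN for handwritten character recognition."""
    def __init__(self, n_classes: int = 84):  # class count is assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 16 -> 8
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                           # global pool
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = SmallHCRNet()
logits = model(torch.randn(4, 1, 32, 32))  # batch of 4 dummy images
print(logits.shape)                        # torch.Size([4, 84])
```

Such a network has orders of magnitude fewer weights than a ResNet-50-based model, which is the size contrast the abstract draws next.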
The deep neural architecture used in that paper was fairly large, considering the inclusion of the weights of the ResNet 50 model, which is a 50-layer Residual Network. The proposed model achieves higher accuracy than any previous work, and in fewer epochs. ResNet50 is a good model trained on the ImageNet dataset, but I propose an HCR network that is trained from scratch on Bengali characters, without \u201cEnsemble Learning\u201d, that can outperform previous architectures.'\nauthor:\n- 'Roy, Akash'\ntitle: 'AKHCRNet: Bengali handwritten character recognition using deep learning'\n---\n\nIntroduction\n============\n\nMotivation\n----------\n\nBengali is one of the official languages of the Republic of India and the official language of Bangladesh. About 230 million people speak and write Bengali as a native language around the world. So recognition of Bengali characters is an important problem needed" +"---\nabstract: 'The paper addresses an existence problem for infinite horizon optimal control when the system under control is exponentially stabilizable or stable. Classes of nonlinear control systems for which infinite horizon optimal controls exist are identified in terms of stability, stabilizability, detectability and growth conditions. The result then applies to estimate the existence region of stable manifolds in the associated Hamiltonian systems. Applications of the results also include the analysis of the turnpike property in nonlinear finite horizon optimal control problems by a geometric approach.'\nauthor:\n- 'Noboru Sakamoto[^1]'\ntitle: 'When does stabilizability imply the existence of infinite horizon optimal control in nonlinear systems?[^2]'\n---\n\nOptimal control, Stability, Stabilizability, Detectability, Stable manifold, Turnpike.\n\n49K15, 49J15, 93D20, 93C10\n\nIntroduction\n============\n\nOptimal control problems (OCPs) are of significance from mathematical and engineering viewpoints, as applications and extensions of Calculus of Variations as well as design tools for systems describing engineering processes. There are two approaches to OCPs, one from the sufficiency of optimality (Dynamic Programming [@Bellman:57:DP] developed by Bellman) and the other from necessity (Maximum Principle [@Pontryagin:62:MTOP] developed by Pontryagin). We refer to [@Athans:66:OC; @Bryson:75:AOC; @Cesari:83:OTA; @Liberzon:12:CVOCT] for the theory of OCPs and to [@Bryson:96:cssmag] for a survey on OCPs from" +"---\nabstract: 'In this paper we show that the $\\mathrm{K}$-homology groups of a separable C\\*-algebra can be enriched with additional descriptive set-theoretic information, and regarded as *definable groups*. 
Using a definable version of the Universal Coefficient Theorem, we prove that the corresponding *definable* $\\mathrm{K}$-homology is a finer invariant than the purely algebraic one, even when restricted to the class of UHF C\\*-algebras, or to the class of unital commutative C\\*-algebras whose spectrum is a $1$-dimensional connected subspace of $\\mathbb{R}^{3}$.'\naddress: |\n School of Mathematics and Statistics\\\n Victoria University of Wellington\\\n PO Box 600, 6140 Wellington, New Zealand\nauthor:\n- Martino Lupini\nbibliography:\n- 'cohomology-BE2.bib'\ntitle: 'Definable $\\mathrm{K}$-homology of separable C\\*-algebras'\n---\n\n[^1]\n\nIntroduction {#introduction .unnumbered}\n============\n\nGiven a compact metrizable space $X$, the group $\\mathrm{Ext}\\left( X\\right)$ classifying extensions of the C\\*-algebra $C\\left( X\\right)$ by the C\\*-algebra $K\\left( H\\right)$ of compact operators was initially considered by Brown, Douglas, and Fillmore in their celebrated work [@brown_extensions_1977]. There, they showed that $\\mathrm{Ext}\\left( -\\right)$ is indeed a group, and that defining, for a compact metrizable space $X$, $$\\mathrm{\\tilde{K}}_{p}\\left( X\\right) :=\\left\\{ \n\\begin{array}{ll}\n\\mathrm{Ext}\\left( X\\right) & \\text{if }p\\text{ is odd,} \\\\ \n\\mathrm{Ext}\\left( \\Sigma X\\right) & \\text{if }p\\text{ is even;}\n\\end{array}\n\\right.$$ where $\\Sigma" +"---\nabstract: 'We propose generalizations of Hotelling\u2019s $T^2$ statistic and the Bhattacharyya distance for data taking values in Lie groups. A key feature of the derived measures is that they are compatible with the group structure even for manifolds that do not admit any bi-invariant metric. This property, e.g., assures analyses that do not depend on the reference shape, thus preventing bias due to arbitrary choices thereof. Furthermore, the generalizations agree with the common definitions for the special case of flat vector spaces, guaranteeing consistency. Employing a permutation test setup, we further obtain nonparametric, two-sample testing procedures that themselves are bi-invariant and consistent. We validate our method in group tests revealing significant differences in hippocampal shape between individuals with mild cognitive impairment and normal controls.'\nauthor:\n- Martin Hanik\n- 'Hans-Christian Hege'\n- Christoph von Tycowicz\nbibliography:\n- 'bibliography.bib'\ntitle: |\n Bi-invariant Two-Sample Tests in Lie Groups\\\n for Shape Analysis\\\n---\n\nIntroduction\n============\n\nShape analysis is applied successfully in a variety of different fields and is further fuelled by the ongoing advance of 3D imaging technology\u00a0[@AmbellanLameckervonTycowiczetal.2019]. Although the objects themselves are embedded in Euclidean space, the resulting shape data is often part of a complex nonlinear manifold. Thus," +"---\nabstract: 'The destruction of a star by the tides of a supermassive black hole (SMBH) powers a bright accretion flare, and the theoretical modeling of such tidal disruption events (TDEs) can provide a direct means of inferring SMBH properties from observations. Previously it has been shown that TDEs with $\\beta = r_{\\rm t}/r_{\\rm p} = 1$, where $r_{\\rm t}$ is the tidal disruption radius and $r_{\\rm p}$ is the pericenter distance of the star, form an in-plane caustic, or \u201cpancake,\u201d where the tidally disrupted debris is compressed into a one-dimensional line within the orbital plane of the star. 
Here we show that this result applies generally to all TDEs for which the star is fully disrupted, i.e., that satisfy $\\beta \\gtrsim 1$. We show that the location of this caustic is always outside of the tidal disruption radius of the star and the compression of the gas near the caustic is at most mildly supersonic, which results in an adiabatic increase in the gas density above the tidal density of the black hole. As such, this in-plane pancake revitalizes the influence of self-gravity even for large $\\beta$, in agreement with recent simulations. This finding suggests that for all TDEs" +"---\naddress:\n- ', , '\n- ', , '\n- ', , '\n- ', , . The work of Marcel Schweitzer was partly supported by the SNSF research project *Low-rank updates of matrix functions and fast eigenvalue solvers*.'\nauthor:\n- Peter Kandolf\n- Antti Koskela\n- 'Samuel D. Relton'\n- 'Marcel Schweitzer\\*'\nbibliography:\n- 'matrixfunctions.bib'\ntitle: 'Computing low-rank approximations of the Fr\u00e9chet derivative of a matrix function using Krylov subspace methods'\n---\n\nIntroduction {#sec:introduction}\n============\n\nMatrix functions $f \\colon \\mathbb{C}^{n \\times n} \\rightarrow \\mathbb{C}^{n \\times n}$ are an increasingly important part of applied mathematics with a wide variety of applications. The matrix exponential, $f(A) = e^A$, arises in network analysis\u00a0[@EstradaHigham2010] and exponential integrators\u00a0[@HochbruckLubich1997; @HochbruckOstermann2010; @HochbruckLubichSelhofer1998]; whilst the matrix logarithm, $f(A) = \\log(A)$, occurs in models of bladder carcinoma\u00a0[@gsrp14] and when computing the matrix geometric mean\u00a0[@jvv12].\n\nAlso of importance is the Fr\u00e9chet derivative of a matrix function, defined as the unique operator $L_f(A, \\cdot)\\colon \\mathbb{C}^{n\\times n} \\rightarrow \\mathbb{C}^{n\\times n}$ that is linear in its second argument and, for any matrix $E \\in \\mathbb{C}^{n \\times n}$, satisfies $$\\nonumber\n \\label{eq.FD_defn}\n f(A + E) - f(A) = L_f(A,E) + o(\\|{E}\\|),$$ where $\\|\\cdot\\|$ denotes the matrix two-norm and $o(\\|E\\|)$ represents a" +"---\nabstract: 'In this work, an advanced motion controller is proposed for buck converter-fed DC motor systems. The design is based on the idea of active disturbance rejection control (ADRC) with its key component being a custom observer capable of reconstructing various types of disturbances (including complex, harmonic signals). A special formulation of the proposed design allows the control action to be expressed in a concise and practically appealing form, reducing its implementation requirements. The obtained experimental results show increased performance of the introduced approach over conventionally used methods in tracking precision and disturbance rejection, while keeping a similar level of energy consumption. 
A stability analysis using the theory of singular perturbation further supports the validity of the proposed control approach.'\naddress:\n- 'Energy Electricity Research Center, International Energy College, Jinan University, 206 Qianshan Road, Zhuhai, Guangdong, 519070 P.\u00a0R.\u00a0China'\n- 'Institute of Automation and Robotics, Poznan University of Technology, Poznan, Poland'\n- 'Military Academy, University of Defense, Belgrade, Serbia'\n- 'Department of Mathematics, Cleveland State University, Cleveland, OH, USA'\n- 'School of Automation, Southeast University, Key Laboratory of Measurement and Control of CSE, Ministry of Education, Nanjing, P.\u00a0R.\u00a0China'\nauthor:\n- Rafal\u00a0Madonski\n- 'Krzysztof\u00a0[\u0141]{}akomy'\n- Momir\u00a0Stankovic\n-" +"---\nabstract: 'A stabilizer free weak Galerkin (WG) finite element method on polytopal mesh has been introduced in Part I of this paper (J. Comput. Appl. Math, 371 (2020) 112699. arXiv:1906.06634.) Removing stabilizers from discontinuous finite element methods simplifies formulations and reduces programming complexity. The purpose of this paper is to introduce a new WG method without stabilizers on polytopal mesh that has convergence rates one order higher than optimal convergence rates. This method is the first WG method that achieves superconvergence on polytopal mesh. Numerical examples in 2D and 3D are presented, verifying the theorem.'\nauthor:\n- 'Xiu Ye[^1]'\n- 'Shangyou Zhang[^2]'\ntitle: 'A stabilizer free weak Galerkin finite element method on polytopal mesh: Part II'\n---\n\nweak Galerkin finite element methods, second-order elliptic problems, polytopal meshes\n\nPrimary: 65N15, 65N30; Secondary: 35J50\n\nIntroduction {#Section:Introduction}\n============\n\nA stabilizing/penalty term is often used in finite element methods with discontinuous approximations to enforce connection of discontinuous functions across element boundaries. Removing stabilizers from discontinuous finite element methods is desirable since it simplifies the formulation and reduces programming complexity. A stabilizer free weak Galerkin finite element has been developed in [@yz-sf-wg] for the following model problem: seek an unknown function $u$ satisfying $$\\begin{aligned}\n-\\Delta" +"---\nabstract: 'Spin-orbit interactions (SOI) are a set of sub-wavelength optical phenomena in which spin and spatial degrees of freedom of light are intrinsically coupled. One unique example of SOI, the spin-Hall effect of light (SHEL), has been an area of extensive research with potential applications in spin-controlled photonic devices as well as in the emerging fields of spinoptics and spintronics. Here, we report our experimental study on SHEL due to forward scattering of focused linearly polarized Gaussian and Hermite-Gaussian ($\\textrm{HG}_{10}$) beams from a silver nanowire (AgNW). Spin-dependent anti-symmetric intensity patterns are obtained when the polarization of the scattered light is analysed. The corresponding spin-Hall signal is obtained by computing the far-field longitudinal spin density ($s_3$). Furthermore, by comparing the $s_3$ distributions, significant enhancement of the spin-Hall signal is found for the $\\textrm{HG}_{10}$ beam compared to the Gaussian beam. The investigation of the optical fields at the focal plane of the objective lens reveals the generation of longitudinally spinning fields as the primary reason for the effects. The experimental results are corroborated by 3-dimensional numerical simulations. 
The results lead to a better understanding of SOI and can have direct implications for chip-scale spin-assisted photonic devices.'\nauthor:\n- Diptabrata\n- 'Deepak K.'" +"---\nabstract: 'Here, we study the electrical transport and specific heat in the 4$d$-based ferromagnetic material SrRuO$_3$ and its Ti-substituted SrRu$_{1-x}$Ti$_x$O$_3$ series ($x$ $\\le$ 0.7). SrRuO$_3$ is a metal and shows itinerant ferromagnetism with transition temperature $T_c$ $\\sim$ 160 K. The nonmagnetic Ti$^{4+}$ (3$d^0$) substitution would not only weaken the active Ru-O-Ru channel but is also expected to tune the electronic density and electron correlation effect. A metal to insulator transition has been observed around $x$ $\\sim$ 0.4. The nature of charge transport in the paramagnetic-metallic state ($x$ $\\leq$ 0.4) and in the insulating state ($x$ $>$ 0.4) follows a modified Mott variable range hopping model. In the ferromagnetic-metallic state, the resistivity shows a $T^2$ dependence below $T_c$, which, however, changes to a $T^{3/2}$ dependence at low temperature. In Ti-substituted samples, the temperature range of the $T^{3/2}$ dependence extends to higher temperatures. Interestingly, this $T^{3/2}$ dependence dominates in the whole ferromagnetic regime in the presence of a magnetic field. This evolution of electronic transport behavior can be explained within the framework of Fermi liquid theory and the electron-magnon scattering mechanism. The negative magnetoresistance exhibits a hysteresis and a crossover between negative and positive values with magnetic field, which is connected with the magnetic behavior across the series. The decreasing electronic coefficient" +"---\nabstract: 'We introduce a new method to evaluate algebraic integrals over the simplex numerically. This new approach employs techniques from tropical geometry and exceeds the capabilities of existing numerical methods by an order of magnitude. The method can be improved further by exploiting the geometric structure of the underlying integrand. As an illustration of this, we give a specialized integration algorithm for a class of integrands that exhibit the form of a generalized permutahedron. This class includes integrands for scattering amplitudes and parametric Feynman integrals with tame kinematics. A proof-of-concept implementation is provided with which Feynman integrals up to loop order $17$ can be evaluated.'\nauthor:\n- 'Michael Borinsky[^1]'\ntitle: Tropical Monte Carlo quadrature for Feynman integrals\n---\n\nIntroduction\n============\n\nMotivation\n----------\n\nFeynman integrals are ubiquitous in various branches of theoretical physics. They are hard to evaluate and predictions for particle physics experiments rely heavily on them. Their evaluation even poses a bottleneck for the analysis of the data from some high-accuracy experiments [@Heinrich:2020ybq]. This situation has fostered the development of extremely sophisticated and specialized technologies aimed at obtaining a manageable analytic expression for a given Feynman integral. The state-of-the-art technique is the *differential equation" +"---\nabstract: 'The main result of this paper is that one cannot hear orientability of a surface with boundary. More precisely, we construct two isospectral flat surfaces with boundary with the same Neumann spectrum, one orientable, the other non-orientable. For this purpose, we apply Sunada\u2019s and Buser\u2019s methods in the framework of orbifolds. 
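Returning briefly to the SrRu$_{1-x}$Ti$_x$O$_3$ transport discussion above: deciding between the $T^2$ and $T^{3/2}$ resistivity regimes is, in practice, a power-law fitting problem. The sketch below does this on synthetic data; the model form is standard, but all numbers are invented for illustration and do not come from the paper.

```python
# Sketch: fitting rho(T) = rho0 + A*T^n and reading off the exponent n
# (n = 2 for Fermi-liquid, n = 3/2 for electron-magnon scattering).
import numpy as np
from scipy.optimize import curve_fit

def rho_model(T, rho0, A, n):
    return rho0 + A * T ** n

T = np.linspace(2.0, 40.0, 60)
rng = np.random.default_rng(2)
rho = rho_model(T, 10.0, 0.05, 1.5) + 0.02 * rng.normal(size=T.size)

popt, _ = curve_fit(rho_model, T, rho, p0=(10.0, 0.05, 2.0))
print("fitted exponent n =", popt[2])  # should come out near 1.5
```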
Choosing a symmetric tile in our construction, and adapting a folklore argument of Fefferman, we also show that the surfaces have different Dirichlet spectra. These results were announced in the [*C. R. Acad. Sci. Paris S\u00e9r. I Math.*]{}, volume 320 in 1995, but the full proofs so far have only circulated in preprint form.'\nauthor:\n- 'Pierre B\u00e9rard and David L. Webb'\ntitle: 'One can\u2019t hear orientability of surfaces'\n---\n\nKeywords: [Spectrum, Laplacian, Isospectral surfaces, Orientability]{}\\\nMSC\u00a02010: [58J50, 58J32]{}\\\n\nIntroduction\n============\n\nLet $M$ be a compact Riemannian manifold with boundary. The [*spectrum*]{} of $M$ is the sequence of eigenvalues of the Laplace-Beltrami operator $\\Delta f= -{\\mathop{\\rm div}\\nolimits}({\\mathop{\\rm grad}\\nolimits}f)$ acting on smooth functions on $M$; when $\\partial M \\ne \\varnothing$, one can impose either Dirichlet boundary conditions on the function $f$ (i.e., $f|_{\\partial M}=0$) or Neumann boundary conditions (the normal derivative $\\partial f/\\partial n$ vanishes on $\\partial" +"---\nabstract: 'Frenkel exciton population dynamics of an excitonic dimer is studied by comparing results from a quantum master equation (QME) involving rates from a second-order perturbative treatment with respect to the excitonic coupling with non-perturbative results from \u201cHierarchical Equations of Motion\u201d (HEOM). By formulating generic Liouville-space expressions for the rates, we can choose to evaluate them either via HEOM propagations or by applying cumulant expansion. The coupling of electronic transitions to bath modes is modeled either as overdamped oscillators for the description of thermal bath components or as underdamped oscillators to account for intramolecular vibrations. Cases of initial nonequilibrium and equilibrium vibrations are discussed. In the case of HEOM, initial equilibration enters via a polaron transformation. Pointing out the differences between the nonequilibrium and equilibrium approach in the context of the projection operator formalism, we identify a further description, where the transfer dynamics is driven only by fluctuations without involvement of dissipation. Despite this approximation, this approach, too, can yield meaningful results in certain parameter regimes. While for the chosen model HEOM has no technical advantage for the evaluation of the rate expressions compared to cumulant expansion, there are situations where only evaluation with HEOM is applicable. For instance, a separation of reference" +"---\nauthor:\n- Ga\u00e9tan Berthe\n- Barnaby Martin\n- Dani\u00ebl Paulusma\n- Siani Smith\ntitle: 'The Complexity of $L(p,q)$-Edge-Labelling'\n---\n\nIntroduction {#s-intro}\n============\n\nThis paper studies a problem that falls under the distance-constrained labelling framework. Given any fixed nonnegative integer values $p$ and $q$, an [*$L(p,q)$-$k$-labelling*]{} is an assignment of [*labels*]{} from $\\{0,\\ldots,k-1\\}$ to the vertices of a graph such that adjacent vertices receive labels that differ by at least $p$, and vertices connected by a path of length\u00a0$2$ receive labels that differ by at least $q$\u00a0[@Ca11]. Some authors instead define the latter condition as requiring that vertices at distance\u00a0$2$ receive labels that differ by at least $q$ (e.g. [@FKK01]). These definitions are the same so long as $p\\geq q$ and much of the literature considers only this case (e.g. [@JKM09]). If $q>p$, the definitions diverge. 
For example, in an $L(1,2)$-labelling, the vertices of a triangle $K_3$ can receive labels $\\{0,1,2\\}$ under the second definition but require $\\{0,2,4\\}$ under the first. We use the [*first*]{} definition, in line with [@Ca11]. The decision problem of testing whether, for a given integer $k$, a given graph $G$ admits an $L(p,q)$-$k$-labelling is known as [$L(p,q)$-Labelling]{}. If $k$ is [*fixed*]{}, that is, not part of" +"---\nabstract: 'Because of the limits input/output systems currently impose on high-performance computing systems, a new generation of workflows that include online data reduction and analysis is emerging. Diagnosing their performance requires sophisticated performance analysis capabilities due to the complexity of execution patterns and underlying hardware, and no existing tool could handle the voluminous performance trace data needed to detect potential problems. This work introduces Chimbuko, a performance analysis framework that provides real-time, distributed, *in situ* anomaly detection. Data volumes are reduced for human-level processing without losing necessary details. Chimbuko supports online performance monitoring via a visualization module that presents the overall workflow anomaly distribution, call stacks, and timelines. Chimbuko also supports the capture and reduction of performance provenance. To the best of our knowledge, Chimbuko is the first online, distributed, and scalable workflow-level performance trace analysis framework, and we demonstrate the tool\u2019s usefulness on Oak Ridge National Laboratory\u2019s Summit system.'\nauthor:\n- \n- \n- \n- \n- \nbibliography:\n- 'chimbuko.bib'\ntitle: 'Chimbuko: A Workflow-Level Scalable Performance Trace Analysis Tool [^1]'\n---\n\nPerformance Trace, Benchmark, Profiling, Anomaly Detection, Visualization, Provenance\n\nIntroduction\n============\n\nThe Chimbuko framework captures, analyzes, and visualizes performance metrics for complex scientific workflows at scale. Meanwhile, the TAU performance analysis" +"---\nabstract: 'Personalization should take the human person seriously. This requires a deeper understanding of how recommender systems can shape both our self-understanding and identity. We unpack key European humanistic and philosophical ideas underlying the General Data Protection Regulation (GDPR) and propose a new paradigm of humanistic personalization. Humanistic personalization responds to the IEEE\u2019s call for Ethically Aligned Design (EAD) and is based on fundamental human capacities and values. Humanistic personalization focuses on narrative accuracy: the subjective fit between a person\u2019s self-narrative and both the input (personal data) and output of a recommender system. In doing so, we re-frame the distinction between implicit and explicit data collection as one of nonconscious (\u201corganismic\u201d) behavior and conscious (\u201creflective\u201d) action. This distinction raises important ethical and interpretive issues related to agency, self-understanding, and political participation. Finally, we discuss how an emphasis on narrative accuracy can reduce opportunities for epistemic injustice done to data subjects.'\nauthor:\n- 'Travis Greene & Galit Shmueli'\nbibliography:\n- 'Facct3pgUpdateSept4.bib'\ntitle: |\n Beyond Our Behavior:\\\n The GDPR and Humanistic Personalized Recommendation\n---\n\nIntroduction\n============\n\nMachine learning-backed personalized services, among them recommender systems (RS), have become a permanent fixture in our increasingly digital lives. 
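The $K_3$ example from the $L(p,q)$-labelling discussion above is easy to verify by brute force. The checker below implements the *first* definition quoted there (adjacent vertices differ by at least $p$; vertices joined by a path of length 2 differ by at least $q$, even if they are also adjacent); it is a sketch written for this excerpt, not code from the paper.

```python
from itertools import combinations

def is_lpq_labelling(edges, labels, p, q):
    """First definition: adjacent vertices differ by >= p; vertices joined
    by a path of length 2 differ by >= q (even if also adjacent)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for u, v in combinations(adj, 2):
        diff = abs(labels[u] - labels[v])
        if v in adj[u] and diff < p:
            return False
        if adj[u] & adj[v] and diff < q:  # common neighbour => 2-path
            return False
    return True

K3 = [(0, 1), (1, 2), (0, 2)]
print(is_lpq_labelling(K3, {0: 0, 1: 1, 2: 2}, p=1, q=2))  # False
print(is_lpq_labelling(K3, {0: 0, 1: 2, 2: 4}, p=1, q=2))  # True
```

In $K_3$ every pair of vertices is both adjacent and joined through the third vertex, so under the first definition every pair must differ by $\max(p,q)=2$, which is exactly what the two calls confirm.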
Personalization relies on vast quantities of" +"---\nabstract: 'Particle track reconstruction is the most computationally intensive process in nuclear physics experiments. Traditional algorithms use a combinatorial approach that exhaustively tests track measurements (\u201chits\u201d) to identify those that form an actual particle trajectory. In this article, we describe the development of four machine learning (ML) models that assist the tracking algorithm by identifying valid track candidates from the measurements in drift chambers. Several types of machine learning models were tested, including: Convolutional Neural Networks (CNN), Multi-Layer Perceptrons (MLP), Extremely Randomized Trees (ERT) and Recurrent Neural Networks (RNN). As a result of this work, an MLP network classifier was implemented as part of the CLAS12 reconstruction software to provide the tracking code with recommended track candidates. The resulting software achieved an accuracy of greater than 99% and resulted in an end-to-end speedup of 35% compared to existing algorithms.'\naddress:\n- 'CRTC, Department of Computer Science, Old Dominion University, Norfolk, VA, USA'\n- 'Jefferson Lab, Newport News, VA, USA'\nauthor:\n- Polykarpos Thomadakis\n- Angelos Angelopoulos\n- Gagik Gavalian\n- Nikos Chrisochoides\nbibliography:\n- 'references.bib'\ntitle: |\n Using Machine Learning for Particle Track\\\n Identification in the CLAS12 Detector\n---\n\nIntroduction\n============\n\nIn nuclear physics, experiments measuring scattered particle parameters are" +"---\nabstract: |\n In this paper, a fractional Lotka-Volterra mathematical model for a bioreactor is proposed and used to fit the data provided by a bioprocess known as continuous fermentation of *Zymomonas mobilis*. The model incorporates a time delay $\\tau$ due to the dead time in obtaining the measurement of the biomass $x(t)$. A Hopf bifurcation analysis is performed to characterize the inherent self-oscillatory experimental bioprocess response. As a consequence, stability conditions for the equilibrium point, together with conditions for limit cycles using the delay $\\tau$ as the bifurcation parameter, are obtained. Under the assumption that observers, estimators, or extra laboratory measurements are avoided to prevent additional computational or monetary costs, we only consider the measurement of the biomass for the purpose of control. A simple controller that can be employed is the proportional action controller $u(t)=k_px(t)$, which is shown to fail to stabilize the obtained model under the proposed analysis. Another suitable choice is the use of a delayed controller $u(t)=k_rx(t-h)$, which successfully stabilizes the model even when the uncontrolled model is unstable. Finally, the proposed theoretical results are corroborated through numerical simulations.\n\n **Keywords:** bioreactor, bifurcations, time-delay systems, fractional mathematical model.\nauthor:\n- 'R. Villafuerte-Segura'\n- 'B.\u00a0A.\u00a0Itz\u00e1-Ortiz'" +"---\nabstract: 'Experiments on graphene bilayers, where the top layer is rotated with respect to the one below, have displayed insulating behavior when the moir\u00e9 bands are partially filled. 
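The stabilising effect of the delayed controller $u(t)=k_rx(t-h)$ discussed in the bioreactor abstract above can be explored with a simple fixed-step integrator. The sketch below uses an ordinary (non-fractional) scalar test system as a stand-in, since the fractional case requires specialised solvers; the gain, delay, and plant are all illustrative assumptions.

```python
import numpy as np

# Euler integration of x'(t) = a*x(t) - k_r*x(t - h): a scalar stand-in
# for a delayed-feedback loop of the form u(t) = k_r x(t - h).
a, k_r, h = 1.0, 1.5, 0.3      # open-loop unstable since a > 0
dt, T = 1e-3, 20.0
n, lag = int(T / dt), int(h / dt)

x = np.empty(n)
x[: lag + 1] = 1.0             # constant history on [-h, 0]
for i in range(lag, n - 1):
    x[i + 1] = x[i] + dt * (a * x[i] - k_r * x[i - lag])

print("x(T) =", x[-1])         # decays toward 0 for these (k_r, h)
```

For this choice of parameters the delayed feedback renders the unstable open loop stable, illustrating numerically the kind of result the abstract attributes to the delayed controller.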
Mueller'\nbibliography:\n- 'biblio.bib'\ntitle: Correlated Insulators in Twisted Bilayer Graphene\n---\n\nIntroduction\n============\n\nGraphene bilayers, where the top layer is rotated with respect to the bottom, show remarkable properties [@cao1; @cao2; @choi; @zondiner; @sharpe; @jiang; @kerelsky; @yazdani; @kimdasilva; @yankowitz]. These arise from the presence of a long wavelength moir\u00e9 superlattice. For twists near certain \u201cmagic angles\", the low energy bands become very flat, and interactions dominate [@Bistritzer]. Moreover, the bands have non-trivial topological indices. Experimentally one observes insulating phases at certain rational fillings of the bands. Electrostatically doping away from these rational fillings leads to superconducting phases, whose transition temperatures are large compared to the bandwidth [@cao1; @cao2]. An important part of understanding the physics of these systems is to identify the structure of the correlated insulating states. Here we conduct a variational study of the various possible charge-density, spin-density, and valley-density waves, which are the most natural" +"---\nabstract: 'Higher-order repulsive interactions are included in the three-flavor NJL model in order to describe the quark phase of a hybrid star. The effect of 4-quark and 8-quark vector-isoscalar interactions on the stability of hybrid star configurations is analyzed. The presence of an 8-quark vector-isoscalar channel is seen to be crucial in generating large quark branches in the $M(R)$ diagram. This is due to its stiffening effect on the quark matter equation of state, which arises from the non-linear density dependence of the speed of sound. This additional interaction channel allows for the appearance of a quark core at moderately low NS masses, $\\sim 1M_{\\odot}$, and provides the required repulsion to preserve the star\u2019s stability up to $\\sim2.1M_{\\odot}$. Furthermore, we show that both the heaviest NS mass generated, $M_{\\text{max}}$, and its radius, $R_{\\text{max}}$, are quite sensitive to the strength of the 8-quark vector-isoscalar channel, leading to a considerable decrease of $R_{\\text{max}}$ as the coupling increases. This behavior imprints a considerable deviation from the purely hadronic matter equation of state in the $\\Lambda(M)$ diagram, which might be a possible signature of the existence of quark matter, even for moderately low NS masses, $\\sim 1.4\\, M_\\odot$. The resulting $M(R)$ and $\\Lambda(R)$ relations are" +"---\nabstract: 'Clinical Named Entity Recognition (CNER) aims to automatically identify clinical terminologies in Electronic Health Records (EHRs), which is a fundamental and crucial step for clinical research. Training a high-performance model for CNER usually requires a large number of EHRs with high-quality labels. However, labeling EHRs, especially Chinese EHRs, is time-consuming and expensive. One effective solution to this is active learning, where a model asks labelers to annotate data which the model is uncertain of. Conventional active learning assumes a single labeler that always returns noiseless answers to queried labels. However, in real settings, multiple labelers provide annotations of diverse quality at varied costs, and labelers with low overall annotation quality can still assign correct labels for some specific instances. 
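One way to make the preceding observation operational is to score instance-labeler pairs jointly on informativeness, expected label quality, and cost. The scoring rule below is a generic illustration of that idea written for this excerpt; it is not the CQAAL criterion proposed next, and all numbers are invented.

```python
import numpy as np

# Toy instance-labeler selection: score = informativeness * expected
# quality / cost, then greedily pick the best pair.
rng = np.random.default_rng(3)
n_instances = 6

p = rng.uniform(0.05, 0.95, n_instances)  # model P(entity) per instance
entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))  # uncertainty
quality = np.array([0.95, 0.80, 0.60])    # per-labeler accuracy (assumed)
cost = np.array([5.0, 2.0, 1.0])          # per-label cost (assumed)

# score[i, j]: value of asking labeler j to annotate instance i
score = entropy[:, None] * quality[None, :] / cost[None, :]
i, j = np.unravel_index(np.argmax(score), score.shape)
print(f"query instance {i} from labeler {j}")
```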
In this paper, we propose a Cost-Quality Adaptive Active Learning (CQAAL) approach for CNER in Chinese EHRs, which maintains a balance between the annotation quality, labeling costs, and the informativeness of selected instances. Specifically, CQAAL selects cost-effective instance-labeler pairs to achieve better annotation quality with lower costs in an adaptive manner. Computational results on the CCKS-2017 Task 2 benchmark dataset demonstrate the superiority and effectiveness of the proposed CQAAL.'\nauthor:\n- \nbibliography:\n- 'Bibfiles.bib'\ntitle: 'Cost-Quality Adaptive" +"---\nabstract: 'We discovered an over-density of H$\\alpha$-emitting galaxies associated with a [*Planck*]{} compact source in the COSMOS field (PHz\u00a0G237.0+42.5) through narrow-band imaging observations with Subaru/MOIRCS. This [*Planck*]{}-selected dusty proto-cluster at $z=2.16$ has 38 H$\\alpha$ emitters including six spectroscopically confirmed galaxies in the observed MOIRCS 4$'$$\\times$7$'$ field (corresponding to $\\sim$2.0$\\times$3.5\u00a0Mpc$^2$ in physical scale). We find that massive H$\\alpha$ emitters with $\\log$$(M_{\\star}/M_{\\odot})$$>$10.5 are strongly clustered in the core of the proto-cluster (within $\\sim$300\u00a0kpc of the density peak of the H$\\alpha$ emitters). Most of the H$\\alpha$ emitters in this proto-cluster lie along the star-forming main sequence using H$\\alpha$-based SFR estimates, whilst the cluster total SFR derived by integrating the H$\\alpha$-based SFRs is an order of magnitude smaller than that estimated from [*Planck/Herschel*]{} FIR photometry. Our results suggest that H$\\alpha$ is a good observable for detecting moderately star-forming galaxies and tracing the large-scale environment in and around high-$z$ dusty proto-clusters, but there is a possibility that a large fraction of star formation could be obscured by dust and undetected in H$\\alpha$ observations.'\nauthor:\n- |\n Yusei Koyama,$^{1,2}$[^1] Maria del Carmen Polletta,$^{3,4}$ Ichi Tanaka,$^{1}$ Tadayuki Kodama$^{5}$, Herv\u00e9 Dole$^{6}$, Genevi\u00e8ve Soucail$^{4}$, Brenda Frye$^{7}$, Matthew Lehnert$^{8}$, Marco Scodeggio$^{3}$\\\n \\\n $^{1}$Subaru Telescope, National Astronomical Observatory" +"---\nabstract: 'Multiobjective stochastic programming is a field well suited to tackle problems arising in emergencies, given that uncertainty and multiple objectives are usually present in such problems. A new concept of solution is proposed in this work, especially designed for risk-aversion solutions. A linear programming model is presented to obtain such a solution.'\nauthor:\n- Javier Le\u00f3n\n- Justo Puerto\n- Bego\u00f1a Vitoriano\nbibliography:\n- 'inputs/biblioMSP.bib'\ntitle: 'A risk-aversion approach for the Multiobjective Stochastic Programming problem'\n---\n\n*Keywords*: Multiobjective stochastic programming; Linear programming; Risk aversion\n\nIntroduction\n============\n\nDecision making is never easy, yet we often have to make decisions. Emergencies and disaster management are fields in which many difficulties often arise, such as high uncertainty and multiple conflicting objectives. To overcome such difficulties, risk-aversion decisions are usually sought. Risk-aversion is the attitude by which we prefer to lower uncertainty rather than gamble on extreme outcomes (positive or negative).\n\nRisk-aversion, although typically studied in problems with uncertainty, can as well be considered when making decisions with multiple criteria. 
For instance, in the field of disaster management, solutions that are sufficiently good for all criteria are usually preferred over others that perform exceptionally well for some criteria but inadequately for the others.\n\nMulticriteria" +"---\nabstract: 'Machine learning approaches have recently been applied to the study of various problems in physics. Most of the studies are focused on interpreting the data generated by conventional numerical methods or an existing database. An interesting question is whether it is possible to use a machine learning approach, in particular a neural network, for solving the many-body problem. In this paper, we present a solver for the interacting quantum problem on small clusters based on a neural network. We study a small quantum cluster which mimics the single impurity Anderson model. We demonstrate that the neural network based solver provides quantitatively accurate results for the spectral function as compared to the exact diagonalization method. This opens the possibility of utilizing the neural network approach as an impurity solver for other many-body numerical approaches, such as dynamical mean field theory.'\nauthor:\n- Nicholas Walker\n- Samuel Kellar\n- Yi Zhang\n- 'Ka-Ming Tam'\nbibliography:\n- 'refs.bib'\ntitle: Neural Network Solver for Small Quantum Clusters \n---\n\nIntroduction\n============\n\nA single quantum impurity is the simplest possible quantum many-body problem for which interaction plays a crucial role [@Kondo_1964; @Anderson_1970]. It was invented as a model to describe diluted magnetic impurity" +"---\nabstract: 'Topological constraints (TCs) between polymers determine the behaviour of complex fluids such as creams, oils and plastics. Most of the polymer solutions used in everyday life employ linear chains; their behaviour is accurately captured by the reptation and tube theories which connect microscopic TCs to macroscopic viscoelasticity. On the other hand, polymers with non-trivial topology, such as rings, hold great promise for new technology but pose a challenging problem as they do not obey standard theories; additionally, topological invariance \u2013 i.e. the fact that rings must remain unknotted and unlinked if prepared so \u2013 precludes any serious analytical treatment. Here we propose an unambiguous, parameter-free algorithm to characterise TCs in polymeric solutions and show its power in characterising TCs of entangled rings. We analyse large-scale molecular dynamics (MD) simulations via persistent homology, a key mathematical tool to extract robust topological information from large datasets. This method allows us to identify ring-specific TCs which we call \u201chomological threadings\u201d (H-threadings) and to connect them to the polymers\u2019 behaviour. It also allows us to identify, in a physically appealing and unambiguous way, scale-dependent loops which have eluded precise quantification so far.
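The diagram-extraction step underlying such an analysis is straightforward to reproduce in outline. The sketch below is a generic example using the open-source ripser package on a synthetic noisy circle standing in for bead coordinates; it is not the paper's H-threading algorithm, only the persistent-homology computation on which such an algorithm can build.

```python
# A minimal persistent-homology sketch using the open-source ripser.py
# package. "points" is a noisy circle, a stand-in for polymer bead
# coordinates; long-lived H1 features correspond to robust loops.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))

dgms = ripser(points, maxdim=1)['dgms']   # persistence diagrams for H0, H1
h1 = dgms[1]
persistence = h1[:, 1] - h1[:, 0]         # lifetime (death - birth) of loops
print("most persistent loop:", h1[np.argmax(persistence)])
```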
We discover that while threaded neighbours slowly grow with the" +"[**An analysis of transverse momentum spectra of various jets produced in high energy collisions**]{}\n\n.75cm\n\nYang-Ming Tai$^{1,2,}$[[^1]]{}, Pei-Pin Yang$^{1,2,}$[[^2]]{}, Fu-Hu Liu$^{1,2,}$[[^3]]{}\\\n\n*$^1$Institute of Theoretical Physics & State Key Laboratory of Quantum Optics and Quantum Optics Devices,\\\nShanxi University, Taiyuan, Shanxi 030006, People\u2019s Republic of China*\n\n$^2$Collaborative Innovation Center of Extreme Optics, Shanxi University,\\\nTaiyuan, Shanxi 030006, People\u2019s Republic of China\n\n.5cm\n\n[**Abstract:**]{} Within the framework of the multi-source thermal model, we analyze the experimental transverse momentum spectra of various jets produced in different collisions at high energies. Two energy sources, a projectile participant quark and a target participant quark, are considered. Each energy source (each participant quark) is assumed to contribute to the transverse momentum distribution according to the TP-like function, i.e., a revised Tsallis\u2013Pareto-type function. The contribution of the two participant quarks to the transverse momentum distribution is then the convolution of two TP-like functions. The model distribution can be used to fit the experimental spectra measured by different collaborations. The related parameters, such as the entropy index, effective temperature, and revised index, are then obtained. The trends of these parameters are useful for understanding the characteristics of high-energy collisions.\\\n[**Keywords:**]{} Transverse momentum spectra, High-energy jets, TP-like function\\" +"---\nabstract: 'The paper is devoted to the Lie group properties of the one-dimensional Green-Naghdi equations describing the behavior of fluid flow over uneven bottom topography. The bottom topography is incorporated into the Green-Naghdi equations in two ways: in the classical Green-Naghdi form and in the approximated form of the same order. The study is performed in Lagrangian coordinates, which allows one to find Lagrangians for the analyzed equations. Complete group classification of both cases of the Green-Naghdi equations with respect to the bottom topography is presented. Applying Noether\u2019s theorem to the obtained Lagrangians, together with the group classification, conservation laws of the one-dimensional Green-Naghdi equations with uneven bottom topography are obtained. Difference schemes which preserve the symmetries of the original equations and the conservation laws are constructed. Analysis of the developed schemes is given. The schemes are tested numerically on the example of an exact traveling-wave solution.'\naddress:\n- |\n  $^a$ Keldysh Institute of Applied Mathematics, Russian Academy of Science,\\\n  Miusskaya Pl.\u00a04, Moscow, 125047, Russia;\\\n- |\n  $^b$ School of Mathematics, Institute of Science,\\\n  Suranaree University of Technology, 30000, Thailand\\\nauthor:\n- 'V.A. DORODNITSYN$^a$, E.I. KAPTSOV$^{a,b}$ and S.V. MELESHKO$^b$, [^1]'\ntitle: |\n  Symmetries, conservation laws, invariant solutions and difference schemes\\" +"---\nabstract: 'In this work we analyse the ultimate sensitivity of dark matter direct detection experiments, the \u201cneutrino-floor\u201d, in the presence of anomalous sources of dark radiation in the form of SM or semi-sterile neutrinos. This flux component is assumed to be produced from dark matter decay.
Since dark radiation may mimic dark matter signals, we perform our analysis based on likelihood statistics that allows us to test the distinguishability between signals and backgrounds. We show that the neutrino floor for xenon-based experiments may be lifted in the presence of extra dark radiation. In addition, we explore the testability of neutrino dark radiation from dark matter decay in direct detection experiments. Given the previous bounds from neutrino experiments, we find that xenon-based dark matter searches will not be able to probe new regions of the dark matter progenitor mass and lifetime parameter space when the decay products are SM neutrinos. In turn, if the decay instead happens into a fourth neutrino species with enhanced interactions with baryons, DR can either constitute the dominant background or a discoverable signal in direct detection experiments.'\nauthor:\n- Marco Nikolic\n- Suchita Kulkarni\n- Josef Pradler\nbibliography:\n- 'Refs.bib'\ntitle: 'The neutrino-floor in the presence of dark" +"---\nabstract: 'With the development of advanced communication technology, connected vehicles, which can conduct cooperative maneuvers with each other as well as with road entities through vehicle-to-everything communication, have become increasingly popular in our transportation systems. Much research interest has been drawn to other building blocks of a connected vehicle system, such as communication, planning, and control. However, fewer studies have focused on human-machine cooperation and interfaces, namely how to visualize the guidance information to the driver as an advanced driver-assistance system (ADAS). In this study, we propose an augmented reality (AR)-based ADAS, which visualizes the guidance information calculated cooperatively by multiple connected vehicles. An unsignalized intersection scenario is adopted as the use case of this system, where the driver can drive the connected vehicle across the intersection under the AR guidance, without any full stop at the intersection. A simulation environment is built in the Unity game engine based on the road network of San Francisco, and human-in-the-loop (HITL) simulation is conducted to validate the effectiveness of our proposed system regarding travel time and energy consumption.'\nauthor:\n- |\n  Ziran Wang, Kyungtae Han, and Prashant Tiwari\\\n  Toyota Motor North America R&D, InfoTech Labs, Mountain View, CA, USA\\\n  {[[ziran.wang](mailto:ziran.wang@toyota.com)]{}," +"---\nabstract: 'In low Mach number aeroacoustics, the well-known disparity of scales makes it possible to apply efficient hybrid simulation models using different meshes for flow and acoustics, which leads to a powerful computational procedure. Our study applies the hybrid workflow to the computationally efficient perturbed convective wave equation with only one scalar unknown, the acoustic velocity potential. The workflow of this aeroacoustic approach is based on three steps: 1. perform unsteady incompressible flow computations on a sub-domain; 2. compute the acoustic sources; 3. simulate the acoustic field using a mesh specifically suited for computational aeroacoustics. In general, hybrid aeroacoustic methods seek robust and conservative mesh-to-mesh transformation of the aeroacoustic sources while high computational efficiency is ensured.
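The conservative mesh-to-mesh source transfer between steps 2 and 3 can be illustrated with a one-dimensional toy model. The sketch below, with an illustrative sinusoidal source in place of real CFD data, implements a cell-centroid-based conservative transfer in which each fine-mesh cell's integrated source is credited to the coarse acoustic cell containing its centroid, so the total source integral is preserved exactly; it is not the authors' code.

```python
# Toy 1-D cell-centroid-based conservative transfer of source terms from
# a fine "CFD" mesh to a coarse "acoustic" mesh. All data are stand-ins.
import numpy as np

cfd_edges = np.linspace(0.0, 1.0, 101)               # 100 fine CFD cells
ac_edges = np.linspace(0.0, 1.0, 11)                 # 10 coarse acoustic cells
cfd_centroids = 0.5 * (cfd_edges[:-1] + cfd_edges[1:])
cfd_vol = np.diff(cfd_edges)
source_density = np.sin(2 * np.pi * cfd_centroids)   # illustrative source field

integrated = source_density * cfd_vol                # per-cell source integrals
target = np.digitize(cfd_centroids, ac_edges) - 1    # acoustic cell per centroid
ac_integrated = np.bincount(target, weights=integrated, minlength=10)
ac_density = ac_integrated / np.diff(ac_edges)       # coarse-mesh source density

assert np.isclose(integrated.sum(), ac_integrated.sum())  # conservation holds
```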
In this paper, we investigate the accuracy of a cell-centroid-based conservative interpolation scheme compared to the more accurate cut-volume cell approach and their application to the computation of rotating systems, namely an axial fan. The capability and robustness of the cut-volume cell interpolation in a hybrid workflow on different meshes are investigated by a grid convergence study. The results of the acoustic simulation are in good agreement with measurements, thus demonstrating the applicability of the conservative cut-volume cell interpolation to" +"---\nabstract: 'We provide a theoretical analysis by means of the nonperturbative functional renormalization group (NP-FRG) of the corrections to scaling in the critical behavior of the random-field Ising model (RFIM) near the dimension $d_{DR}\\approx 5.1$ that separates a region where the renormalized theory at the fixed point is supersymmetric and critical scaling satisfies the $d\\to d-2$ dimensional reduction property ($d>d_{DR}$) from a region where both supersymmetry and dimensional reduction break down at criticality ($d<d_{DR}$). [\u2026]'" +"---\nabstract: '[\u2026] $\\delta>0.$ We demonstrate that for polynomial and rational functions of that random variable there exist at most finitely many risk critical points. The latter are those special values of the threshold parameter for which the rate of change of risk is unbounded as $\\delta$ approaches these threshold values. We characterize candidates for risk critical points as zeroes of either the resultant of a relevant $\\delta$-perturbed polynomial, or of its leading coefficient, or both. Thus the equations that need to be solved are themselves polynomial equations in $\\delta$ that exploit the algebraic properties of the underlying polynomial or rational functions. We name these important equations \u201chidden equations of threshold risk\u201d.'\naddress:\n- 'College of Science and Engineering, Flinders University, South Australia, Australia'\n- 'Centre for Applications in Natural Resource Mathematics, School of Mathematics and Physics, The University Of Queensland, Queensland, Australia'\nauthor:\n- 'Vladimir V. Ejov'\n- 'Jerzy A. Filar'\n- Zhihao Qiao\nbibliography:\n- 'ref.bib'\ntitle: Hidden Equations of Threshold Risk\n---\n\nIntroduction and Motivation\n===========================\n\nThis paper is motivated by the" +"---\nauthor:\n- 'Ryu Makiya,'\n- 'Issha Kayo,'\n- Eiichiro Komatsu\nbibliography:\n- 'main.bib'\ntitle: 'Ray-tracing log-normal simulation for weak gravitational lensing: application to the cross-correlation with galaxies'\n---\n\nIntroduction\n============\n\nThe large-scale structure (LSS) of the universe is a powerful tool for cosmology [@peebles:1980]. It has been intensively studied using various probes such as galaxy clustering and weak gravitational lensing shear fields. See refs.\u00a0[@alam/etal:2017; @alam/etal:2020; @troxel/etal:2018; @hikage/etal:2019; @hildebrandt/etal:2020; @heymans/etal:2020] for recent measurements.\n\nThe galaxy clustering in redshift space, mainly measured from spectroscopic galaxy samples, offers a probe of the expansion history of the universe as well as the growth rate of the structure through the baryon acoustic oscillations [@eisenstein/etal:2005; @cole/etal:2005] and the redshift space distortion (RSD) [@jackson:1972; @sargent/turner:1977; @kaiser:1987]. A key ingredient in the analysis of the galaxy clustering is the galaxy bias (see [@desjacques/etal:2018] for a review), which relates the clustering amplitude of galaxies to the underlying dark matter density fields.
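For orientation, in its simplest (linear, scale-independent) form the bias relation reads

$$\delta_g(\mathbf{x}) = b\,\delta_m(\mathbf{x}) \quad \Longrightarrow \quad P_{gg}(k) = b^2 P_{mm}(k),$$

so $b$ rescales the clustering amplitude and is degenerate with the amplitude of the matter fluctuations, which is why it must be marginalized over in cosmological analyses.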
The galaxy bias is usually treated as a nuisance parameter, which limits the constraining power of the galaxy clustering on cosmological parameters.\n\nThe cosmological weak gravitational lensing effect is a magnification and coherent distortion of galaxy images induced by the intervening matter density field [@schneider/ehlers/falco:1992]. Unlike the" +"---\nabstract: 'A terrestrial robot that can maneuver over rough terrain and scout places is very useful in mapping out unknown areas. It can also be used to explore dangerous areas in place of humans. A terrestrial robot modeled after a scorpion will be able to traverse undetected and can be used for surveillance purposes. Therefore, this paper proposes the modelling of a scorpion-inspired robot and a reinforcement learning (RL) based controller for navigation. The robot scorpion uses serial four-bar mechanisms for the leg movements. It also has an active tail and a movable claw. The controller is trained to navigate the robot scorpion to the target waypoint. The simulation results demonstrate efficient navigation of the robot scorpion.'\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'IEEEexample.bib'\ntitle: |\n  Control of a Nature-inspired Scorpion using Reinforcement Learning\\\n  [^1][^2]\n---\n\nrobot scorpion, reinforcement learning, nature-inspired, navigation\n\nIntroduction\n============\n\nRobotic scorpions can be used for exploring various planetary terrains hazardous for humans [@scoting_leg_rob]. The tail of a scorpion can adjust to balance out the scorpion when walking on uneven terrain. By using a combination of various tail angles, the scorpion robot can cross uneven and steep terrain with minimal effort. A robotic scorpion using" +"---\nabstract: 'The ROMS modeling system was applied to the California Upwelling System (CalUS) to understand the key hydrodynamic conditions and dynamics of the nitrogen-based ecosystem using the NPZD model proposed by @Powell_2006. A new type of sponge layer has been successfully implemented in the ROMS modelling system in order to stabilize the hydrodynamic part of the modeling system when using so-called \u201creduced\u201d boundary conditions. The hydrodynamic performance of the model was examined using a tidal analysis based on tidal measurement data, a comparison of the modeled sea surface temperature (SST) with buoy and satellite data, and vertical sections of the currents along the coast and the water temperature. This validation process shows that the hydrodynamic module used in this study can reproduce the basic hydrodynamic and circulation characteristics within the CalUS. The results of the ecosystem model show the characteristic features of upwelling regions as well as the well-known spotty horizontal structures of the zooplankton community. The model thus provides a solid basis for the hydrodynamic and ecological characteristics of the CalUS and enables the ecological model to be expanded into a complex ecological model for investigating the effects of climate change on the ecological balance in the area" +"---\nabstract: 'Many research questions involve time-to-event outcomes that can be prevented from occurring due to competing events. In these settings, we must be careful about the causal interpretation of classical statistical estimands. In particular, estimands on the hazard scale, such as ratios of cause-specific or subdistribution hazards, are fundamentally hard to interpret causally.
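As a reference for the contrast drawn next, the risk-scale quantity for cause $k$, the cumulative incidence function, is related to the cause-specific hazards $\lambda_j$ by the standard identity

$$F_k(t) = \int_0^t \exp\left(-\int_0^u \sum_j \lambda_j(s)\, ds\right) \lambda_k(u)\, du,$$

so each cumulative incidence depends on all cause-specific hazards at once; a ratio of two hazards in isolation therefore does not translate directly into a statement about risk. (The paper's own notation may differ.)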
Estimands on the risk scale, such as contrasts of cumulative incidence functions, do have a causal interpretation, but they only capture the total effect of the treatment on the event of interest; that is, effects both through and outside of the competing event. To disentangle causal treatment effects on the event of interest and competing events, the separable direct and indirect effects were recently introduced. Here we provide new results on the estimation of direct and indirect separable effects in continuous time. In particular, we derive the nonparametric influence function in continuous time and use it to construct an estimator that has certain robustness properties. We also propose a simple estimator based on semiparametric models for the two cause specific hazard functions. We describe the asymptotic properties of these estimators, and present results from simulation studies, suggesting that the estimators behave satisfactorily in finite" +"---\nabstract: 'The use of complex networks for time series analysis has recently shown to be useful as a tool for detecting dynamic state changes for a wide variety of applications. In this work, we implement the commonly used ordinal partition network to transform a time series into a network for detecting these state changes for the simple magnetic pendulum. The time series that we used are obtained experimentally from a base-excited magnetic pendulum apparatus, and numerically from the corresponding governing equations. The magnetic pendulum provides a relatively simple, non-linear example demonstrating transitions from periodic to chaotic motion with the variation of system parameters. For our method, we implement persistent homology, a shape measuring tool from Topological Data Analysis (TDA), to summarize the shape of the resulting ordinal partition networks as a tool for detecting state changes. We show that this network analysis tool provides a clear distinction between periodic and chaotic time series. Another contribution of this work is the successful application of the networks-TDA pipeline, for the first time, to signals from non-autonomous nonlinear systems. This opens the door for our approach to be used as an automatic design tool for studying the effect of design parameters on" +"---\nabstract: 'There is a growing interest from both the academia and industry to employ distributed ledger technology in the Internet-of-Things domain for addressing security-related and performance challenges. Distributed ledger technology enables non-trusted entities to communicate and reach consensus in a fully distributed manner through a cryptographically secure and immutable ledger. However, significant challenges arise mainly related to transaction processing speed and user privacy. 
This work explores the interplay between Internet-of-Things and distributed ledger technology, analysing the fundamental characteristics of this technology and discussing the related benefits and challenges.'\naddress: |\n  Institute of Computer Science\\\n  Foundation for Research & Technology - Hellas\\\n  Heraklion, Crete, Greece\nauthor:\n- Pavlos Charalampidis\n- Alexandros Fragkiadakis\nbibliography:\n- 'refs.bib'\ntitle: 'When Distributed Ledger Technology meets Internet of Things - Benefits and Challenges'\n---\n\nInternet-of-Things, Distributed Ledger Technology, Blockchain, Consensus algorithms, Smart Contracts, Security, Privacy\n\nIntroduction\n============\n\nInternet-of-Things (IoT) technologies and subsequent applications are increasing remarkably, providing solutions in several areas such as industry, healthcare, and agriculture. This rapid proliferation has at the same time created challenges related to performance, security and privacy. As IoT networks are mainly based on severely resource-constrained devices (sensors) in terms of memory, processing and storage, strong cryptographic" +"---\nabstract: '[We determine the optimal method of discriminating and comparing quantum states from a certain class of multimode Gaussian states and their mixtures when arbitrary global Gaussian operations and general Gaussian measurements are allowed. We consider the so-called constant-$\\hat{p}$ displaced states which include mixtures of multimode coherent states arbitrarily displaced along a common axis. We first show that no global or local Gaussian transformations or generalized Gaussian measurements can lead to a better discrimination method than simple homodyne measurements applied to each mode separately and classical postprocessing of the results. This result is applied to binary state comparison problems. We show that homodyne measurements, separately performed on each mode, are the best Gaussian measurement for binary state comparison. We further compare the performance of the optimal Gaussian strategy for binary coherent-state comparison with those of non-Gaussian strategies using photon detections. ]{}'\nauthor:\n- 'David E.\u00a0Roberson'\n- Shuro Izumi\n- Wojciech Roga\n- 'Jonas S. Neergaard-Nielsen'\n- Masahiro Takeoka\n- 'Ulrik L. Andersen'\ntitle: 'Limit of Gaussian operations and measurements for Gaussian state discrimination, and its application to state comparison'\n---\n\nIntroduction {#Sect:1}\n============\n\nQuantum state discrimination is the task" +"---\nabstract: '[One of the primary aims of upcoming space-borne gravitational wave detectors is to measure radiation in the mHz range from extreme-mass-ratio inspirals. Such a detection would place strong constraints on hypothetical departures from a Kerr description for astrophysically stable black holes. The Kerr geometry, which is unique in general relativity, admits a higher-order symmetry in the form of a Carter constant, which implies that the equations of motion describing test particle motion in a Kerr background are Liouville-integrable. In this article, we investigate whether the Carter symmetry itself is discernible from a generic deformation of the Kerr metric in the gravitational waveforms for such inspirals.
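For orientation, the Carter constant referred to above takes the standard Boyer-Lindquist form for a timelike geodesic with energy $E$, axial angular momentum $L_z$, rest mass $\mu$ and black hole spin $a$:

$$Q = p_\theta^2 + \cos^2\theta \left[ a^2\left(\mu^2 - E^2\right) + \frac{L_z^2}{\sin^2\theta} \right],$$

and its conservation, together with that of $E$ and $L_z$, is what renders Kerr geodesic motion Liouville-integrable.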
We build on previous studies by constructing a new metric which respects current observational constraints, describes a black hole, and contains two non-Kerr parameters, one of which controls the presence or absence of the Carter symmetry, thereby controlling the existence of chaotic orbits, and another which serves as a generic deformation parameter. We find that these two parameters introduce fundamentally distinct features into the orbital dynamics, and evince themselves in the gravitational waveforms through a significant dephasing. Although only explored in the quadrupole approximation, this, together with a Fisher metric analysis, suggests that" +"---\nabstract: 'Detection mechanisms for low mass bosonic dark matter candidates, such as the axion or hidden photon, leverage potential interactions with electromagnetic fields, whereby the dark matter (of unknown mass) on rare occasion converts into a single photon. Current dark matter searches operating at microwave frequencies use a resonant cavity to coherently accumulate the field sourced by the dark matter and a near standard quantum limited (SQL) linear amplifier to read out the cavity signal. To further increase sensitivity to the dark matter signal, sub-SQL detection techniques are required. Here we report the development of a novel microwave photon counting technique and a new exclusion limit on hidden photon dark matter. We operate a superconducting qubit to make repeated quantum non-demolition measurements of cavity photons and apply a hidden Markov model analysis to reduce the noise to $\\SI{15.7} {\\dB}$ below the quantum limit, with overall detector performance limited by a residual background of real photons. With the present device, we perform a hidden photon search and constrain the kinetic mixing angle to $\\epsilon \\leq 1.68 \\times 10^{-15}$ in a band around $\\SI{6.011} {\\giga \\hertz}$ ($ \\SI{24.86} {\\micro \\electronvolt}$) with an integration time of $\\SI{8.33} {\\second}$. This demonstrated noise reduction technique" +"---\nabstract: 'Modeling the effects of mutations on the binding affinity plays a crucial role in protein engineering and drug design. In this study, we develop a novel deep-learning-based framework, named GraphPPI, to predict the binding affinity changes upon mutations based on the features provided by a graph neural network (GNN). In particular, GraphPPI first employs a well-designed pre-training scheme to force the GNN to capture the features that are predictive of the effects of mutations on binding affinity in an unsupervised manner and then integrates these graphical features with gradient-boosting trees to perform the prediction. Experiments showed that, without any annotated signals, GraphPPI can capture meaningful patterns of the protein structures. Also, GraphPPI achieved new state-of-the-art performance in predicting the binding affinity changes upon both single- and multi-point mutations on five benchmark datasets. In-depth analyses also showed GraphPPI can accurately estimate the effects of mutations on the binding affinity between SARS-CoV-2 and its neutralizing antibodies.
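The two-stage architecture just described (unsupervised graph embeddings followed by gradient-boosted trees) can be sketched generically. The code below is not the GraphPPI implementation: random vectors stand in for pretrained GNN embeddings of wild-type and mutant complexes, and the labels are placeholders; it only illustrates how such features would be combined and fed to a boosting regressor.

```python
# Schematic two-stage pipeline: precomputed graph embeddings -> boosted
# trees predicting binding-affinity change (ddG). All data are stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_complexes, dim = 500, 64
emb_wt = rng.normal(size=(n_complexes, dim))      # placeholder GNN embeddings
emb_mut = rng.normal(size=(n_complexes, dim))
features = np.hstack([emb_wt, emb_mut - emb_wt])  # wild-type + mutation delta
ddg = rng.normal(size=n_complexes)                # placeholder ddG labels

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
print(cross_val_score(model, features, ddg, cv=5,
                      scoring="neg_mean_absolute_error"))
```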
These results have established GraphPPI as a powerful and useful computational tool in studies of protein design.'\nauthor:\n- 'Xianggen Liu$^{1,2,3}$, Yunan Luo$^{1}$, Sen Song$^{2,3}$ and Jian Peng$^{1}$'\nbibliography:\n- 'main.bib'\ntitle:\n- '**[Pre-training of Graph Neural Network for Modeling Effects" +"=1\n\nIntroduction\n============\n\nIn the seminal work [@Gromov78b], Gromov discovered a gap phenomenon for the sectional curvature (denoted by $\\mathbf{K}_g$ for a smooth Riemannian metric\u00a0$g$) to detect the infranil manifold structure (see also [@BK81; @Ruh82]):\n\n\\[thm: Gromov78\\] There is a dimensional constant $\\varepsilon(m)\\in (0,1)$ such that if a closed $m$-dimensional Riemannian manifold $(M,g)$ satisfies $$\\begin{gathered}\n\\label{eqn: AF_Rm}\n\\operatorname{diam}(M,g)^2\\max_{\\wedge^2TM} |\\mathbf{K}_{g} | \\le \\varepsilon^2 ,\\end{gathered}$$ then $M$ is diffeomorphic to an infranil manifold.\n\nHere we say that $M$ is an *infranil manifold* if on the universal covering $\\tilde{M}$ of\u00a0$M$ there is a flat connection with parallel torsion, defining a\u00a0simply connected nilpotent Lie group structure\u00a0$N$ on\u00a0$\\tilde{M}$ such that $\\pi_1(M)$ is a sub-group of $N\\rtimes \\operatorname{Aut}(N)$ with $ [\\pi_1(M)\\colon \\pi_1(M)\\cap N ]<\\infty$ and ${\\operatorname{rank}}\\pi_1(M)=m$\u00a0\u2013 in the case of Gromov\u2019s almost flat manifold theorem, it is also shown that such an index has a uniform dimensional upper bound $C(m)$.\n\nEver since its birth, Gromov\u2019s almost flat manifold theorem has inspired the research of Riemannian geometers along two themes of generalization. One theme is to find parametrized versions of Theorem\u00a0\\[thm: Gromov78\\], as indicated by Fukaya\u2019s fiber bundle theorem\u00a0[@Fukaya87ld]: if a Riemannian manifold with bounded diameter and sectional curvature is" +"---\nabstract: 'Label-free tracking of small bio-particles such as proteins or viruses is of great utility in the study of biological processes; however, such experiments are frequently hindered by weak signal strengths and a susceptibility to scattering impurities. To overcome these problems, we here propose a novel technique leveraging the enhanced sensitivity of both interferometric detection and the strong field confinement of surface plasmons. Specifically, we show that interference between the field scattered by an analyte particle and a speckle reference field, derived from random scattering of surface plasmons propagating on a rough metal film, enables particle tracking with sub-wavelength accuracy. We present the analytic framework of our technique and verify its robustness to noise through Monte Carlo simulations.'\nauthor:\n- 'Joel Berk, Carl Paterson and Matthew R. Foreman [^1] [^2]'\ntitle: Tracking Single Particles using Surface Plasmon Leakage Radiation Speckle\n---\n\nIntroduction\n============\n\nDetecting and tracking small biological particles, such as viruses or proteins, has played an important part in enabling advances in our understanding of biological processes at the microscopic and nanoscopic level [@McDonald2018VisualizingSensitivity; @Kukura2009High-speedVirus; @Giepmans2006TheFunction]. Such studies require an ability to detect, monitor and analyse processes dynamically without the usual ensemble averaging inherent in many techniques. Optical" +"---\nabstract: 'Soft particles such as microgels can undergo significant and anisotropic deformations when adsorbed to a liquid interface.
This, in turn, leads to a complex phase behavior upon compression. To date, experimental efforts have predominantly provided phenomenological links between microgel structure and resulting interfacial behavior, while simulations have not been entirely successful in reproducing experiments or predicting the minimal requirements for a desired phase behavior. Here we develop a multiscale framework to rationally link the molecular particle architecture to the resulting interfacial morphology and, ultimately, to the collective interfacial phase behavior. To this end, we investigate interfacial morphologies of different poly(N-isopropylacrylamide) particle systems using phase contrast atomic force microscopy and correlate the distinct interfacial morphology with their bulk molecular architecture. We subsequently introduce a new coarse-grained simulation method that uses augmented potentials to translate this interfacial morphology into the resulting phase behavior upon compression. The main novelty in this method is the possibility to efficiently encode multibody interactions, the effects of which are key in distinguishing between heterostructural (anisotropic collapse) and isostructural (isotropic collapse) phase transitions. Our unifying approach allows us to resolve existing discrepancies between experiments and simulations. Notably, we demonstrate the first accurate in silico account of" +"---\nabstract: 'Dissipative solitons are self-localised structures that can persist indefinitely in \u201copen\u201d systems characterised by continual exchange of energy and/or matter with the environment. They play a key role in photonics, underpinning technologies from mode-locked lasers to microresonator optical frequency combs. Here we report on the first experimental observations of spontaneous symmetry breaking of dissipative optical solitons. Our experiments are performed in a passive, coherently driven nonlinear optical ring resonator, where dissipative solitons arise in the form of persisting pulses of light known as Kerr cavity solitons. We engineer balance between two orthogonal polarization modes of the resonator, and show that despite perfectly symmetric operating conditions, the solitons supported by the system can spontaneously break their symmetry, giving rise to two distinct but co-existing vectorial solitons with mirror-like, asymmetric polarization states. We also show that judiciously applied perturbations allow for deterministic switching between the two symmetry-broken dissipative soliton states, thus enabling all-optical manipulation of topological bit sequences. Our experimental observations are in excellent agreement with numerical simulations and theoretical analyses. Besides delivering fundamental insights at the intersection of multi-mode nonlinear optical resonators, dissipative structures, and spontaneous symmetry breaking, our work provides new avenues for the storage, coding, and manipulation" +"---\nauthor:\n- 'Ming-Yueh Huang[^1]$\\ $ and Shu Yang[^2]'\nbibliography:\n- 'MYH.bib'\ntitle: '**Robust inference of conditional average treatment effects using dimension reduction** '\n---\n\n**Abstract**: It is important to make robust inference of the conditional average treatment effect from observational data, but this becomes challenging when the confounder is multivariate or high-dimensional. In this article, we propose a double dimension reduction method, which reduces the curse of dimensionality as much as possible while keeping the nonparametric merit. 
We identify the central mean subspace of the conditional average treatment effect using dimension reduction. A nonparametric regression with prior dimension reduction is also used to impute counterfactual outcomes. This step helps improve the stability of the imputation and leads to a better estimator than existing methods. We then propose an effective bootstrapping procedure without bootstrapping the estimated central mean subspace to make valid inference.\n\n*Key words*: augmented inverse probability weighting; matching; kernel smoothing; U-statistic; weighted bootstrap.\n\nIntroduction\\[sec:intro\\]\n=========================\n\nIn recent biomedical and public health research, there has been growing interest in developing valid and robust inference methods for the conditional average treatment effect, which is also known as the treatment contrast or heterogeneity of treatment effect. In particular, the sign of" +"---\nabstract: 'Whereas a very large number of sensors are available in the automotive field, currently just a few of them, mostly proprioceptive ones, are used in telematics, automotive insurance, and mobility safety research. In this paper, we show that exteroceptive sensors, such as microphones or cameras, could replace proprioceptive ones in many fields. Our main motivation is to provide the reader with alternative ideas for the development of telematics applications when proprioceptive sensors are unusable due to technological issues, privacy concerns, or lack of availability in commercial devices. We first introduce a taxonomy of sensors in telematics. Then, we review in detail all exteroceptive sensors of some interest for vehicle telematics, highlighting advantages, drawbacks, and availability in off-the-shelf devices. Next, we present a list of notable telematics services and applications in research and industry, such as driving profiling or vehicular safety. For each of them, we report the most recent and important works relying on exteroceptive sensors, as well as the available datasets. We conclude by showing open challenges in using exteroceptive sensors, both for industry and research.'\nauthor:\n- 'Fernando\u00a0Molano\u00a0Ortiz, Matteo\u00a0Sammarco, Lu\u00eds\u00a0Henrique\u00a0M.\u00a0K.\u00a0Costa,\u00a0 and\u00a0Marcin\u00a0Detyniecki[^1][^2]'\nbibliography:\n- 'exteroceptive.bib'\ntitle: |\n  Vehicle Telematics Via Exteroceptive Sensors:\\\n  A" +"Introduction\n============\n\nThe goal is to maximize a concave function of $K > 1$ variables. There are $K$ agents and each agent observes the values of the function, corrupted by observation noise, and adjusts his own variable without knowing the values of the other variables. The agents do not communicate their variables. This formulation is motivated by many applications where the agents do not know each other or are not able to communicate directly with one another. Moreover, the agents are not synchronized, so that they update their variables either at the same or at different times.\n\nEach agent [*experiments*]{} by perturbing his variable by a zero-mean change in order to estimate the partial derivative of the function with respect to that variable. He then [*updates*]{} his variable in proportion to the estimate of the partial derivative.\n\nThis algorithm is an extension of [@KW52] and [@S92].
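A minimal sketch of this experiment-and-update scheme, assuming synchronized rounds, a simple concave quadratic objective, and illustrative step-size and perturbation schedules in the spirit of [@KW52], is given below; it is not the paper's exact algorithm.

```python
# Per-agent two-point derivative estimation and gradient ascent on a
# shared concave objective. Agents see only noisy function values.
import numpy as np

rng = np.random.default_rng(2)
K = 5
x = rng.normal(size=K)                      # one variable per agent
f = lambda x: -np.sum((x - 1.0) ** 2)       # concave; maximum at all ones

for t in range(1, 2001):
    a_t, c_t = 1.0 / t, 1.0 / t ** 0.25     # step and perturbation sizes
    for k in range(K):                      # each agent experiments...
        e = np.zeros(K); e[k] = c_t
        noisy = lambda z: f(z) + 0.01 * rng.normal()
        grad_k = (noisy(x + e) - noisy(x - e)) / (2 * c_t)
        x[k] += a_t * grad_k                # ...and updates its own variable

print(np.round(x, 3))  # should be close to the maximizer (all ones)
```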
In [@KW52], the authors introduce a gradient descent algorithm where the gradient is estimated by observing the function at perturbed values of its variable, and they prove the convergence of the algorithm to the minimum of the function. [@S92] proposes a variation of the algorithm in the multivariate case where the partial derivatives with" +"---\nabstract: 'The novel coronavirus disease 2019 (COVID-19) began in Wuhan, China in late 2019 and to date has infected over 14M people worldwide, resulting in over 750,000 deaths[^1]. On March 10, 2020, the World Health Organization (WHO) declared the outbreak a global pandemic. Many academics and researchers, not restricted to the medical domain, began publishing papers describing new discoveries. However, with the large influx of publications, it was hard for these individuals to sift through the large amount of data and make sense of the findings. The White House and a group of industry research labs, led by the Allen Institute for AI, aggregated over 200,000 journal articles related to a variety of coronaviruses and tasked the community with answering key questions related to the corpus, releasing the dataset as CORD-19. The information retrieval (IR) community repurposed the journal articles within CORD-19 to more closely resemble a classic TREC-style competition, dubbed TREC-COVID, with human annotators providing relevancy judgements at the end of each round of competition. Seeing the related endeavors, we set out to repurpose the relevancy annotations for TREC-COVID tasks to identify journal articles in CORD-19 which are relevant to the key questions posed by CORD-19. A BioBERT" +"---\nabstract: 'Charged black holes in anti-de Sitter space become unstable to forming charged scalar hair at low temperatures $T < T_\\text{c}$. This phenomenon is a holographic realization of superconductivity. We look inside the horizon of these holographic superconductors and find intricate dynamical behavior. The spacetime ends at a spacelike Kasner singularity, and there is no Cauchy horizon. Before reaching the singularity, there are several intermediate regimes which we study both analytically and numerically. These include strong Josephson oscillations in the condensate and possible \u2018Kasner inversions\u2019 in which after many e-folds of expansion, the Einstein-Rosen bridge contracts towards the singularity. Due to the Josephson oscillations, the number of Kasner inversions depends very sensitively on $T$, and diverges at a discrete set of temperatures $\\{T_n\\}$ that accumulate at $T_c$. Near these $T_n$, the final Kasner exponent exhibits fractal-like behavior.'\nauthor:\n- |\n  Sean A. Hartnoll$^1$, Gary T. Horowitz$^2$, Jorrit Kruthoff$^1$ and Jorge E.
Santos$^{3,4}$\\\n [*$^1$ Department of Physics, Stanford University, Stanford, CA 94305-4060, USA*]{}\\\n [*$^2$ Department of Physics, University of California, Santa Barbara, CA 93106, USA*]{}\\\n [*$^3$ DAMTP, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK*]{}\\\n [*$^4$ Institute for Advanced Study, Princeton, NJ 08540, USA*]{}\nbibliography:\n- 'references.bib'\ntitle: Diving" +"---\nabstract: |\n **-Background.** Network neuroscience examines the brain as a complex system represented by a network (or connectome), providing deeper insights into the brain morphology and function, allowing the identification of atypical brain connectivity alterations, which can be used as diagnostic markers of neurological disorders.\n\n **-Existing Methods.** Graph embedding methods which map data samples (e.g., brain networks) into a low dimensional space have been widely used to explore the relationship between samples for classification or prediction tasks. However, the majority of these works are based on modeling the pair-wise relationships between samples, failing to capture their higher-order relationships.\n\n **-New Method.** In this paper, inspired by the nascent field of geometric deep learning, we propose Hypergraph U-Net (HUNet), a novel data embedding framework leveraging the hypergraph structure to learn low-dimensional embeddings of data samples while capturing their high-order relationships. Specifically, we generalize the U-Net architecture, naturally operating on graphs, to hypergraphs by improving local feature aggregation and preserving the high-order relationships present in the data.\n\n **-Results.** We tested our method on small-scale and large-scale heterogeneous brain connectomic datasets including morphological and functional brain networks of autistic and demented patients, respectively.\n\n **-Conclusion.** Our HUNet outperformed state-of-the art geometric graph and hypergraph" +"---\nabstract: 'We formulate the chiral vortical effect (CVE) and its generalization called generalized vortical effect using the semiclassical theory of wave packet dynamics. We take the spin-vorticity coupling into account and calculate the transport charge current by subtracting the magnetization one from the Noether local one. We find that the transport charge current in the CVE always vanishes in relativistic chiral fermions. This result implies that it cannot be observed in transport experiments in condensed matter systems such as Dirac/Weyl semimetals with the pseudo-Lorentz symmetry. We also demonstrate that the anisotropic CVE can be observed in nonrelativistic systems that belong to the point groups $D_n, C_n (n = 2, 3, 4, 6)$, and $C_1$, such as $n$-type tellurium.'\nauthor:\n- Atsuo Shitade\n- Kazuya Mameda\n- Tomoya Hayata\ntitle: Chiral vortical effect in relativistic and nonrelativistic systems\n---\n\nIntroduction {#sec:introduction}\n============\n\nQuantum anomalies play important roles in high energy and condensed matter physics. In a relativistic system of chiral fermions, the chiral symmetry in the classical action breaks down when the theory is quantized in the presence of electromagnetic fields, which is known as a chiral anomaly\u00a0[@PhysRev.177.2426; @Bell1969]. Such a system is realized in quark-gluon plasmas in heavy-ion" +"---\nauthor:\n- Hannah Kwak\n- Jongchul Chae\n- 'Maria S. 
Madjarska'\n- Kyuhyoun Cho\n- Donguk Song\nbibliography:\n- 'ms.bib'\ndate: 'Received date, accepted date'\ntitle: Impulsive wave excitation by rapidly changing granules\n---\n\n=1\n\nIntroduction {#intro}\n============\n\nA wide range of observations has revealed that oscillations and waves are abundant in the solar atmosphere. They are clearly observed in the umbral and penumbral regions of sunspots [e.g., @bec72] and the network and internetwork regions of the quiet Sun [e.g., @orr66; @deub90]. It is not yet known how waves are generated in the solar atmosphere. Theoretical studies suggest that solar acoustic waves can be produced by impulsive disturbances in a gravitationally stratified medium [@kal94; @chae15]. @chae15 report that when a region is disturbed by an impulsive event, acoustic waves with an acoustic cutoff frequency naturally arise in a medium. In terms of global p-mode oscillations, it is now generally accepted that p-modes are excited by turbulent convection [@gol90]. @nig99 found that a wave excitation source is located at a depth of 75$\\pm$25 km below the photosphere by comparing theoretical and observed p-mode power spectra. Since turbulent convection occurs ubiquitously, the observed oscillations show the superposition of oscillation signals coming from" +"---\nauthor:\n- Alzayat Saleh\n- 'Issam H. Laradji'\n- 'Dmitry A. Konovalov'\n- Michael Bradley\n- David Vazquez\n- Marcus Sheaves\nbibliography:\n- 'references.bib'\ntitle: 'A Realistic Fish-Habitat Dataset to Evaluate Algorithms for Underwater Visual Analysis'\n---\n\nIntroduction\n============\n\n![**A comparison of fish datasets.** (a) QUT\u00a0[@anantharajah2014local], (b) Fish4Knowledge\u00a0[@f4kFinalReport], (c) Rockfish\u00a0[@Rockfish2013], and (d) our proposed dataset DeepFish. Datasets (a-c) are acquired from constrained environments, whereas *DeepFish* has more realistic and challenging environments. (Figures a-c were obtained from the open-source datasets [@anantharajah2014local; @f4kFinalReport; @Rockfish2013] )[]{data-label=\"qutf4k\"}](./figures/qutf4k.png){width=\"99.00000%\"}\n\nDataset\n=======\n\nOur goal is to design a benchmark that can enable significant progress in fish habitat understanding. Thus we carefully look into the quality of data acquisition, preparation, and annotation protocol.\n\nAccordingly, we start with the dataset \u00a0[@Bradley2019] as it consists of a large number of images (around 40 thousand) that capture high variability of underwater fish habitats. The dataset\u2019s diversity and size make it suitable for training and evaluating deep learning methods. However, the dataset\u2019s original purpose was not to evaluate machine learning methods. It was to examine the interactive structuring effects of local habitat characteristics and environmental context on assemblage composition of juvenile fish.\n\nYet the characteristics of the dataset" +"---\nabstract: 'We study the set ${\\mathscr L}(G)$ of lengths of all cycles that appear in a random $d$-regular graph $G$ on $n$ vertices for a fixed $d\\geq 3$, as well as in Erd\u0151s\u2013R\u00e9nyi random graphs on $n$ vertices with a fixed average degree $c>1$. Fundamental results on the distribution of cycle counts in these models were established in the 1980\u2019s and early 1990\u2019s, with a focus on the extreme lengths: cycles of fixed length, and cycles of length linear in $n$.
Here we derive, for a random $d$-regular graph, the limiting probability that ${\\mathscr L}(G)$ simultaneously contains the entire range $\\{\\ell,\\ldots,n\\}$ for $\\ell\\geq 3$, as an explicit expression $\\theta_\\ell=\\theta_\\ell(d)\\in(0,1)$ which goes to $1$ as $\\ell\\to\\infty$. For the random graph ${\\mathcal G}(n,p)$ with $p=c/n$, where $c\\geq C_0$ for some absolute constant $C_0$, we show the analogous result for the range $\\{\\ell,\\ldots,(1-o(1)){L_{\\max}}(G)\\}$, where ${L_{\\max}}$ is the length of a longest cycle in $G$. The limiting probability for ${\\mathcal G}(n,p)$ coincides with $\\theta_\\ell$ from the $d$-regular case when $c$ is the integer $d-1$. In\u00a0addition, for the directed random graph ${\\mathcal D}(n,p)$ we show results analogous to those on ${\\mathcal G}(n,p)$, and for both models we find an interval of $c {\\varepsilon}^2 n$" +"---\nabstract: |\n  According to recent empirical studies, a majority of users have the same, or very similar, passwords across multiple password-secured online services. This practice can have disastrous consequences, as one password being compromised puts all the other accounts at much higher risk. Generally, an adversary may use any side-information he/she possesses about the user, be it demographic information, password reuse on a previously compromised account, or any other relevant information to devise a better brute-force strategy (a so-called targeted attack).\n\n  In this work, we consider a distributed brute-force attack scenario in which $m$ adversaries, each observing some side information, attempt to breach a password-secured system. We compare two strategies: an uncoordinated attack in which the adversaries query the system based on their own side-information until they find the correct password, and a fully coordinated attack in which the adversaries pool their side-information and query the system together. For passwords $\\mathbf{X}$ of length $n$, generated independently and identically from a distribution $P_X$, we establish an asymptotic closed-form expression for the uncoordinated and coordinated strategies when the side-information $\\mathbf{Y}_{(m)}$ is generated independently by passing $\\mathbf{X}$ through a memoryless channel $P_{Y|X}$, as the length of the password $n$ goes to infinity." +"---\nabstract: 'In classification problems, the purpose of feature selection is to identify a small, highly discriminative subset of the original feature set. In many applications, the dataset may have thousands of features and only a few dozen samples (sometimes termed \u2018wide\u2019). This study is a cautionary tale demonstrating why feature selection in such cases may lead to undesirable results. To highlight the sample size issue, we derive the required sample size for declaring two features different. Using an example, we illustrate the heavy dependency between feature set and classifier, which calls into question classifier-agnostic feature selection methods. However, the choice of a good selector-classifier pair is hampered by the low correlation between estimated and true error rates, as illustrated by another example. While previous studies raising similar issues validate their message with mostly synthetic data, here we carried out an experiment with 20 real datasets. We created an exaggerated scenario whereby we cut a very small portion of the data (10 instances per class) for feature selection and used the rest of the data for testing.
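The required sample size mentioned above follows, in its textbook two-sample form (the paper's derivation may differ in detail), from

$$n \;\ge\; \frac{2\left(z_{1-\alpha/2} + z_{1-\beta}\right)^2 \sigma^2}{\delta^2}$$

observations per class to detect a mean difference $\delta$ between two normal populations with common variance $\sigma^2$ at significance level $\alpha$ and power $1-\beta$. For example, with $\delta = \sigma$, $\alpha = 0.05$ and power $0.9$, this gives $n \approx 2\,(1.96 + 1.28)^2 \approx 21$ per class, so 10 instances per class cannot reliably detect even a one-standard-deviation difference.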
The results reinforce the caution and suggest that it may be better to refrain from feature selection from very" +"---\nabstract: 'Wide area networking infrastructures (WANs), particularly science and research WANs, are the backbone for moving large volumes of scientific data between experimental facilities and data centers. With demands growing at exponential rates, these networks are struggling to cope with large data volumes, real-time responses, and overall network performance. Network operators are increasingly looking for innovative ways to manage the limited underlying network resources. Forecasting network traffic is a critical capability for proactive resource management, congestion mitigation, and dedicated transfer provisioning. To this end, we propose a nonautoregressive graph-based neural network for multistep network traffic forecasting. Specifically, we develop a dynamic variant of diffusion convolutional recurrent neural networks to forecast traffic in research WANs. We evaluate the efficacy of our approach on real traffic from ESnet, the U.S. Department of Energy\u2019s dedicated science network. Our results show that compared to classical forecasting methods, our approach explicitly learns the dynamic nature of spatiotemporal traffic patterns, showing significant improvements in forecasting accuracy. Our technique can surpass existing statistical and deep learning approaches by achieving $\\approx$20% mean absolute percentage error for multiple hours of forecasts despite dynamic network traffic settings.'\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'reference.bib'\ntitle: |\n Dynamic Graph" +"---\nabstract: 'The characteristics of field electron and ion emission change when the space charge formed by the emitted charge is sufficient to suppress the extracting electric field. This phenomenon is well described for planar emitting diodes by the one dimensional (1D) theory. Here we generalize for any 3D geometry by deriving the scaling laws describing the field suppression in the weak space charge regime. We propose a novel corrected equivalent planar diode model, which describes the space charge effects for any geometry in terms of the 1D theory, utilizing a correction factor that adjusts the diode\u2019s scaling characteristics. We then develop a computational method, based on the Particle-In-Cell technique, which solves numerically the space charge problem. We validate our theory by comparing it to both our numerical calculations and existing experimental data, either of which can be used to obtain the geometrical correction factor of the corrected equivalent planar diode model.'\nauthor:\n- 'A. Kyritsakis'\n- 'M. Veske'\n- 'F. Djurabekova'\nbibliography:\n- 'bibliography/bibliography.bib'\ntitle: General scaling laws of space charge effects in field emission\n---\n\nIntroduction\n============\n\nThe current extracted from an electron emitting cathode or an ion emitting anode can be increased by applying higher electric fields," +"---\nabstract: 'A frequently encountered source of systematic error in quantum computations is imperfections in the control pulses which are the classical fields that control qubit gate operations. From an analysis of the quantum-mechanical time-evolution operator of the spin wavefunction, it has been demonstrated that *composite* pulses can mitigate certain systematic errors and an appealing geometric interpretation was developed for the design of error-suppressing composite pulses. 
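The error suppression achieved by composite pulses is easy to verify numerically. The sketch below uses the well-known BB1 sequence of Wimperis, not a construction specific to the paper above, and compares a bare $\theta$-pulse with its BB1-compensated version when every pulse area is miscalibrated by a common factor $(1+\epsilon)$.

```python
# Numerical check that the BB1 composite sequence suppresses pulse-area
# (amplitude) errors relative to a bare pulse.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def pulse(theta, phi, eps=0.0):
    # SU(2) rotation by theta*(1+eps) about axis (cos phi, sin phi, 0).
    th = theta * (1 + eps)
    n = np.cos(phi) * sx + np.sin(phi) * sy
    return np.cos(th / 2) * np.eye(2) - 1j * np.sin(th / 2) * n

def infidelity(U, V):
    return 1 - abs(np.trace(U.conj().T @ V)) ** 2 / 4

theta = np.pi
target = pulse(theta, 0.0)
phi1 = np.arccos(-theta / (4 * np.pi))   # BB1 phase for target area theta

for eps in (0.01, 0.05, 0.1):
    bare = pulse(theta, 0.0, eps)
    bb1 = pulse(np.pi, phi1, eps) @ pulse(2 * np.pi, 3 * phi1, eps) \
          @ pulse(np.pi, phi1, eps) @ pulse(theta, 0.0, eps)
    print(eps, infidelity(target, bare), infidelity(target, bb1))
```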
Here we show that these same pulse sequences can be obtained within a quasi-classical framework. This raises the question of whether error-correction procedures exist that exploit entanglement in a manner that cannot be reproduced in the quasi-classical formulation.'\nauthor:\n- Qile David Su\nbibliography:\n- 'References.bib'\ntitle: 'Quasi-Classical Rules for Qubit Spin-Rotation Error Suppression.'\n---\n\nIntroduction\n============\n\nAn elementary single-qubit quantum gate is the X gate, which is one possible quantum analog of a classical NOT gate. This is implemented by applying radiation resonant with the qubit transition with a pulse area of $\\pi$, commonly known as a $\\pi$-pulse. During such a pulse, the Bloch vector, a geometrical representation of the state of the qubit, rotates by an angle of $\\pi$. In practice, imperfect control of pulse amplitude, duration, frequency, and phase leads" +"---\nabstract: 'In this paper, we present the concept of boosting the resiliency of optimization-based observers for cyber-physical systems (CPS) using auxiliary sources of information. Due to the tight coupling of physics, communication and computation, a malicious agent can exploit multiple inherent vulnerabilities in order to inject stealthy signals into the measurement process. The problem setting considers the scenario in which an attacker strategically corrupts portions of the data in order to force wrong state estimates, which could have catastrophic consequences. The goal of the proposed observer is to compute the true states in spite of the adversarial corruption. In the formulation, we use a measurement prior distribution generated by the auxiliary model to refine the feasible region of a traditional compressive sensing-based regression problem. A constrained optimization-based observer is developed using an *$l_1$*-minimization scheme. Numerical experiments show that the solution of the resulting problem recovers the true states of the system. The developed algorithm is evaluated through a numerical simulation example of the IEEE 14-bus system.'\nauthor:\n- \nbibliography:\n- 'refs.bib'\n---\n\n****\n\nIntroduction\n============\n\nCyber-physical systems (CPS) are engineered systems that are built from, and depend upon, the seamless integration of cyber and physical components. Hence, CPS are tightly integrated" +"---\nabstract: 'Spike (S) glycoproteins mediate the coronavirus entry into the host cell. The S1 subunit of S-proteins contains the receptor-binding domain (RBD) that is able to recognize different host receptors, highlighting their remarkable capacity to adapt to their hosts over the course of viral evolution. While the RBD in spike proteins is determinant for the virus-receptor interaction, the active residues lie at the receptor-binding motif (RBM), a region located in the RBD that plays a fundamental role in binding the outer surface of their receptors. Here, we address the hypothesis that SARS-CoV and SARS-CoV-2 strains able to use angiotensin-converting enzyme 2 (ACE2) proteins have adapted their RBM along the viral evolution to explore specific conformational topology driven by the residues YGF to infect host cells. We also speculate that this YGF-based mechanism can act as a protein signature located at the RBM to distinguish coronaviruses able to use ACE2 as a cell entry receptor.'\nauthor:\n- 'Patr\u00edcia P. D. Carvalho'\n- 'Nelson A.
Alves'\ntitle: 'Featuring ACE2 binding SARS-CoV and SARS-CoV-2 through a conserved evolutionary pattern of amino acid residues'\n---\n\nIntroduction\n============\n\nViruses are the most numerous type of biological entity on Earth and the identification of novel viruses continues to enlarge the" +"---\nabstract: |\n For both the cubic Nonlinear Schr\u00f6dinger Equation (NLS) as well as the modified Korteweg-de Vries (mKdV) equation in one space dimension we consider the set ${\\mathbf M}_N$ of pure $N$-soliton states, and their associated multisoliton solutions. We prove that (i) the set ${\\mathbf M}_N$ is a uniformly smooth manifold, and (ii) the ${\\mathbf M}_N$ states are uniformly stable in $H^s$, for each $s>-\\frac12$.\n\n One main tool in our analysis is an iterated [B\u00e4cklund ]{}transform, which allows us to nonlinearly add a multisoliton to an existing soliton free state (the soliton addition map) or alternatively to remove a multisoliton from a multisoliton state (the soliton removal map). The properties and the regularity of these maps are extensively studied.\naddress:\n- |\n Mathematisches Institut\\\n Universit\u00e4t Bonn \n- |\n Department of Mathematics\\\n University of California, Berkeley\nauthor:\n- Herbert Koch\n- Daniel Tataru\nbibliography:\n- 'nls.bib'\ntitle: 'Multisolitons for the cubic NLS in 1-d and their stability'\n---\n\nIntroduction\n============\n\nIn this article we consider the focusing cubic Nonlinear Schr\u00f6dinger equation (NLS) $$i u_t + u_{xx} + 2 u |u|^2 = 0, \\qquad u(0) = u_0,\n\\label{nls}$$ and the complex focusing modified Korteweg-de Vries equation (mKdV) $$u_t + u_{xxx} +" +"---\nabstract: 'When people notice something unusual, they discuss it on social media. They leave traces of their emotions via text expressions. A systematic collection, analysis, and interpretation of social media data across time and space can give insights on local outbreaks, mental health, and social issues. Such timely insights can help in developing strategies and resources with an appropriate and efficient response. This study analysed a large Spatio-temporal tweet dataset of the Australian sphere related to COVID19. The methodology included a volume analysis, dynamic topic modelling, sentiment detection, and semantic brand score to obtain an insight on the COVID19 pandemic outbreak and public discussion in different states and cities of Australia over time. The obtained insights are compared with independently observed phenomena such as government reported instances.'\nauthor:\n- 'Md Abul Bashar, Richi Nayak, Thirunavukarasu Balasubramaniam'\nbibliography:\n- 'references.bib'\ntitle: 'Topic, Sentiment and Impact Analysis: COVID19 Information Seeking on Social Media'\n---\n\n<ccs2012> <concept> <concept\\_id>10010405.10010455.10010461</concept\\_id> <concept\\_desc>Applied computing\u00a0Sociology</concept\\_desc> <concept\\_significance>300</concept\\_significance> </concept> </ccs2012>\n\nIntroduction\n============\n\nAn outbreak of infectious diseases such as COVID19 has a devastating impact on society with severe socio-economic consequences. The COVID19 pandemic has already caused the largest global recession in history; global stock markets have crashed, travel" +"---\nabstract: 'In this paper, the effect of B and N doping on the phonon induced thermal conductivity of graphene has been investigated. This study is important when one has to evaluate the usefulness of electronic properties of B and N doped graphene. 
We have performed the calculations by employing density functional perturbation theory (DFPT) to calculate the inter-atomic forces$/$force constants of pristine/doped graphene. Thermal conductivity calculations have been carried out by making use of linearized Boltzmann transport equations (LBTE) under the single-mode relaxation time approximation (RTA). The thermal conductivity of pristine graphene has been found to be of the order of 4000 W/mK at 100 K, which decreases gradually with an increase in temperature. The thermal conductivity decreases drastically by 96 $\\%$ to 190 W/mK when doped with 12.5 $\\%$ B and reduces by 99 $\\%$ to 30 W/mK with 25 $\\%$ B doping. When graphene is doped with N, the thermal conductivity decreases to 4 W/mK and 55 W/mK for 12.5 $\\%$ and 25 $\\%$ doping concentrations, respectively. We have found that the thermal conductivity of doped graphene shows less sensitivity to changes in temperature. It has also been shown that the thermal conductivity of graphene can be tuned with doping and has" +"---\nabstract: 'A dominating set $D$ of a graph $G$ without isolated vertices is called a semipaired dominating set if $D$ can be partitioned into $2$-element subsets such that the vertices in each set are at distance at most $2$. The semipaired domination number, denoted by $\\gamma_{pr2}(G)$, is the minimum cardinality of a semipaired dominating set of $G$. Given a graph $G$ with no isolated vertices, the Minimum Semipaired Domination problem is to find a semipaired dominating set of $G$ of cardinality $\\gamma_{pr2}(G)$. The decision version of the Minimum Semipaired Domination problem is already known to be NP-complete for chordal graphs, an important graph class. In this paper, we show that the decision version of the Minimum Semipaired Domination problem remains NP-complete for split graphs, a subclass of chordal graphs. On the positive side, we propose a linear-time algorithm to compute a minimum cardinality semipaired dominating set of block graphs. In addition, we prove that the Minimum Semipaired Domination problem is APX-complete for graphs with maximum degree $3$.'\nauthor:\n- 'Michael A. Henning'\n- Arti Pandey\n- Vikash Tripathi\nbibliography:\n- 'Semi.bib'\nnocite: '[@*]'\ntitle: Semipaired Domination in Some Subclasses of Chordal Graphs\n---\n\nIntroduction {#sec:1}\n============" +"---\nabstract: '[Understanding the nature of the mysterious pseudogap phenomenon is one of the most important issues associated with cuprate high-$T_c$ superconductors. Here, we report $^{17}$O nuclear magnetic resonance (NMR) studies on two planar oxygen sites in stoichiometric cuprate YBa$_2$Cu$_4$O$_8$ to investigate the symmetry breaking inside the pseudogap phase. We observe that the Knight shifts of the two oxygen sites are identical at high temperatures but different below $T_{\\rm nem} \\sim$ 185 K, which is close to the pseudogap temperature $T^{\\ast}$. Our result provides microscopic evidence for intra-unit-cell electronic nematicity. The difference in quadrupole resonance frequency between the two oxygen sites is unchanged below $T_{\\rm nem}$, which suggests that the observed nematicity does not directly stem from the local charge density modulation. Furthermore, a short-range charge density wave (CDW) order is observed below $T \\simeq$ 150 K. The additional broadening in the $^{17}$O-NMR spectra because of this CDW order is determined to be inequivalent for the two oxygen sites, which is similar to that observed in the case of nematicity. 
These results suggest a possible connection between nematicity, CDW order, and pseudogap.]{}'\nauthor:\n- 'W. Wang'\n- 'J. Luo'\n- 'C. G. Wang'\n- 'J. Yang'\n- 'Y. Kodama'\n-" +"---\nabstract: |\n Medical diagnoses can shape and change the life of a person drastically. Therefore, it is always best advised to collect as much evidence as possible to be certain about the diagnosis. Unfortunately, in the case of the Brugada Syndrome (BrS), a rare and inherited heart disease, only one diagnostic criterion exists, namely, a typical pattern in the Electrocardiogram (ECG).\n\n In the following treatise, we question whether the investigation of ECG strips by means of machine learning methods improves the detection of BrS positive cases and hence the diagnostic process. We propose a pipeline that reads in scanned images of ECGs, and transforms the captured signals to digital time-voltage data after several processing steps. Then, we present a long short-term memory (LSTM) classifier that is built based on the previously extracted data and that makes the diagnosis.\n\n The proposed pipeline distinguishes between three major types of ECG images and recreates each recorded lead signal. Features and quality are retained during the digitization of the data, although some encountered issues are not fully removed (Part I). Nevertheless, the results of the aforesaid program are suitable for further investigation of the ECG by a computational method such as the" +"---\nauthor:\n- '[^1]'\n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \nsubtitle: Columnar Object Framework For Effective Analysis\ntitle: Coffea\n---\n\nIntroduction {#sec:intro}\n============\n\nThe present challenge for High-Energy Particle Physics (HEP) data analysts is daunting: due to the success of the Large Hadron Collider (LHC) data collection campaign over Run 2 (2015-2018), the Compact Muon Solenoid (CMS) detector has amassed a dataset of order 10 billion proton-proton collision events. The raw detector information is reconstructed into high-level information, such as the trajectories of visible outgoing subatomic particles, using a centrally-maintained software\u00a0[@CMSSW] and distributed computing system\u00a0[@WLCG]. Even after significant processing and distillation, this high-level summary of each collision event still contains order 1 kilobyte of compressed data\u00a0[@NanoAOD]. The CMS physicist/data-analyst is tasked with processing the resulting tens of terabytes of distilled data (along with a similar magnitude of simulation data) in a mostly autonomous fashion, typically designing (or inheriting) a processing framework written in C++ or Python using a set of libraries known as the ROOT framework\u00a0[@ROOT], and parallelizing the processing over distributed computing resources using HTCondor\u00a0[@Condor] or similar high-throughput computing systems. Each physicist is interested in a different subset of this" +"---\nabstract: |\n Sparse rewards present a difficult problem in reinforcement learning and may be inevitable in certain domains with complex dynamics such as real-world robotics. Hindsight Experience Replay (HER) is a recent replay memory development that allows agents to learn in sparse settings by altering memories to show them as successful even though they may not be. While, empirically, HER has shown some success, it does not provide guarantees around the makeup of samples drawn from an agent\u2019s replay memory. 
This may result in minibatches that contain only memories with zero-valued rewards or agents learning an undesirable policy that completes HER-adjusted goals instead of the actual goal.\n\n In this paper, we introduce *Or Your Money Back* (OYMB), a replay memory sampler designed to work with HER. OYMB improves training efficiency in sparse settings by providing a direct interface to the agent\u2019s replay memory that allows for control over minibatch makeup, as well as a preferential lookup scheme that prioritizes real-goal memories before HER-adjusted memories. We test our approach on five tasks across three unique environments. Our results show that using HER in combination with OYMB outperforms using HER alone and leads to agents that learn to complete the real" +"=1\n\nIntroduction\n============\n\nGromov proposed studying the geometry of scalar curvature via various metric inequalities which have similarities to classical Riemannian comparison geometry, see in particular\u00a0[@Gromov:MetricInequalitiesScalar; @gromovFourLecturesScalar2019]. One instance of such an inequality is the following estimate on the widths of Riemannian bands.\n\n\\[conj:band-width\\] Let $M$ be a closed manifold of dimension $n-1\\neq 4$ such that $M$ does not admit a metric of positive scalar curvature. Then every Riemannian metric $g$ on $V = M \\times [-1,1]$ of scalar curvature bounded below by $n(n-1) = \\operatorname{scal}_{\\Sphere^n}$ satisfies $$\\operatorname{width}(V, g) \\coloneqq \\operatorname{dist}_g(\\partial_- V, \\partial_+ V) \\leq \\frac{2\\pi}{n},$$ where $\\partial_\\pm V = M \\times \\{\\pm 1\\}$.\n\nRosenberg and Stolz previously proposed a seemingly related conjecture, see\u00a0[@Rosenberg-Stolz:Manifolds-of-psc Section\u00a07]:\n\n\\[conj:no-complete-psc\\] Let $M$ be a closed manifold of dimension $\\neq 4$ such that $M$ does not admit a\u00a0metric of positive scalar curvature. Then $M \\times \\R$ does not admit a complete metric of positive scalar curvature.\n\nWhile these two conjectures appear similar, there is no direct formal implication between them. Conjecture\u00a0\\[conj:band-width\\] only implies that $M \\times \\R$ does not admit a complete metric of *uniformly* positive scalar curvature, a weaker conclusion than what is desired by Conjecture\u00a0\\[conj:no-complete-psc\\].\n\nWe have" +"---\nabstract: 'Despite impressive advancements in Autonomous Driving Systems (ADS), navigation in complex road conditions remains a challenging problem. There is considerable evidence that evaluating the subjective risk level of various decisions can improve ADS\u2019 safety in both normal and complex driving scenarios. However, existing deep learning-based methods often fail to model the relationships between traffic participants and can suffer when faced with complex real-world scenarios. Besides, these methods lack *transferability* and *explainability*. To address these limitations, we propose a novel data-driven approach that uses *scene-graphs* as intermediate representations. Our approach includes a Multi-Relation Graph Convolution Network, a Long-Short Term Memory Network, and attention layers for modeling the subjective risk of driving maneuvers. To train our model, we formulate this task as a supervised scene classification problem. We consider a typical use case to demonstrate our model\u2019s capabilities: lane changes. We show that our approach achieves a higher classification accuracy than the state-of-the-art approach on both large (96.4% vs. 
91.2%) and small (91.8% vs. 71.2%) synthesized datasets, also illustrating that our approach can learn effectively even from smaller datasets. We also show that our model trained on a synthesized dataset achieves an average accuracy of 87.8% when tested on a" +"---\nauthor:\n- |\n [Ce Xu${}^{a,}$[^1]and Jianqiang Zhao${}^{b,}$[^2]]{}\\\n a. School of Math. and Statistics, Anhui Normal University, Wuhu 241000, P.R. China\\\n b. Department of Mathematics, The Bishop\u2019s School, La Jolla, CA 92037, USA\\\n \\[5mm\\] Dedicated to professor Masanobu Kaneko on the occasion of his 60th birthday\ntitle: '**Variants of Multiple Zeta Values with Even and Odd Summation Indices**'\n---\n\n[**Abstract.**]{} In this paper, we define and study a variant of multiple zeta values of level 2 (which is called multiple mixed values or multiple $M$-values, MMVs for short), which forms a subspace of the space of alternating multiple zeta values. This variant includes both Hoffman\u2019s multiple $t$-values and Kaneko-Tsumura\u2019s multiple $T$-values as special cases. We set up the algebra framework for the double shuffle relations (DBSFs) of the MMVs, and exhibits nice properties such as duality, integral shuffle relation, series stuffle relation, etc., similar to ordinary multiple zeta values. Moreover, we study several $T$-variants of Kaneko-Yamamoto type multiple zeta values by establishing some explicit relations between these $T$-variants and Kaneko-Tsumura $\\psi$-values. Furthermore, we prove that all Kaneko-Tsumura $\\psi$-values can be expressed in terms of Kaneko-Tsumura multiple $T$-values by using multiple associated integrals, and find some duality formulas for Kaneko-Tsumura $\\psi$-values." +"---\nabstract: 'Based on a 3D supernova simulation of an $11.8\\,M_\\odot$ progenitor model with initial solar composition, we study the nucleosynthesis using tracers covering the innermost $0.1\\,M_\\odot$ of the ejecta. These ejecta are mostly proton-rich and contribute significant amounts of $^{45}$Sc and $^{64}$Zn. The production of heavier isotopes is sensitive to the electron fraction and hence the neutrino emission from the proto-neutron star. The yields of these isotopes are rather uncertain due to the approximate neutrino transport used in the simulation. In order to obtain the total yields for the whole supernova, we combine the results from the tracers with those for the outer layers from a suitable 1D model. Using the yields of short-lived radionuclides (SLRs), we explore the possibility that an $11.8\\,M_\\odot$ supernova might have triggered the formation of the solar system and provided some of the SLRs measured in meteorites. In particular, we discuss two new scenarios that can account for at least the data on $^{41}$Ca, $^{53}$Mn, and $^{60}$Fe without exceeding those on the other SLRs.'\nauthor:\n- 'A. Sieverding'\n- 'B.\u00a0M\u00fcller'\n- 'Y.-Z. Qian'\ntitle: 'Nucleosynthesis of an $11.8\\,M_\\odot$ Supernova with 3D Simulation of the Inner Ejecta: Overall Yields and Implications for Short-Lived Radionuclides" +"---\nabstract: 'Steemit is a blockchain-based social media platform, where authors can get author rewards in the form of cryptocurrencies called STEEM and SBD (Steem Blockchain Dollars) if their posts are upvoted. Interestingly, curators (or voters) can also get rewards by voting others\u2019 posts, which is called a curation reward. A reward is proportional to a curator\u2019s STEEM stakes. 
Throughout this process, Steemit hopes \u201cgood\u201d content will be automatically discovered by users in a decentralized way, which is known as the Proof-of-Brain (PoB). However, there are many bot accounts programmed to post automatically and get rewards, which discourages real human users from creating good content. We call this type of bot a posting bot. While there are many papers that studied bots on traditional centralized social media platforms such as Facebook and Twitter, we are the first to study posting bots on a blockchain-based social media platform. Compared with the bot detection on the usual social media platforms, the features we created have the advantage that posting bots can be detected without limiting the number or length of posts. We can extract the features of posts by clustering distances between blog data or replies. These features are obtained from the" +"---\nabstract: 'We report on spectral variability of the blazar 3C\u00a0279 in the optical to X-ray band between MJD\u00a055100 and 58400 during which long-term radio variability was observed. We construct light curves and band spectra in each of the optical ($2\\times10^{14}$\u2013$1.5\\times10^{15}$Hz) and X-ray (0.3\u201310keV) bands, measure the spectral parameters (flux $F$ and spectral index $\\alpha$), and investigate the correlation between $F$ and $\\alpha$ within and across the bands. We find that the correlation of the optical properties dramatically changes after $\\sim$MJD\u00a055500 and the light curves show more frequent activity after $\\sim$MJD\u00a057700. We therefore divide the time interval into three \u201cstates\u201d based on the correlation properties and source activity in the light curves, and analyze each of the three states separately. We find various correlations between the spectral parameters in the states and an intriguing 65-day delay of the optical emission with respect to the X-ray one in state\u00a02 (MJD\u00a055500\u201357700). We attempt to explain these findings using a one-zone synchro-Compton emission scenario.'\nauthor:\n- Sungmin Yoo and Hongjun An\nbibliography:\n- '3C279\\_apj.bib'\ntitle: 'Spectral variability of the blazar 3C\u00a0279 in the optical to X-ray band during 2009\u20132018'\n---\n\nIntroduction {#sec:intro}\n============\n\nBlazars, the most energetic" +"---\nabstract: 'Ion-pair dissociation (IPD) of the gas phase carbon dioxide molecule has been studied using time of flight (TOF) based mass spectroscopy in combination with the highly differential velocity slice imaging (VSI) technique. The appearance energy of the fragmented $O^{-}$ ion provides the experimental threshold energy value for the ion-pair production. The kinetic energy (KE) distributions and angular distributions (AD) of the fragment anion provide detailed insight into the IPD dynamics. The KE distribution clearly reveals that the IPD dynamics may be due to the direct access to the ion-pair states. However, an indirect mechanism can\u2019t be ruled out at higher incident electron energies. The angular distribution data unambiguously identified the involvement of an ion-pair state associated with $\\Sigma$ symmetry and a minor contribution from $\\Pi$ symmetric states. 
Computational calculations using density functional theory (DFT) strongly support the experimental observations.'\nauthor:\n- Narayan Kundu\n- Sumit Naskar\n- Irina Jana\n- Anirban Paul\n- Dhananjay Nandi\nbibliography:\n- 'mybibfile.bib'\ntitle: 'Ion-pair dissociation dynamics in electron collision with carbon dioxide probed by velocity slice imaging'\n---\n\nIntroduction\n============\n\nIn recent years dynamics study with anion fragments has marked a new realm in the field of electron induced chemistry [@nikjoo1997computational; @boamah2014low]. Dissociative electron attachment" +"---\nabstract: |\n Authorship identification is the process of identifying and classifying authors through given codes. Authorship identification can be used in a wide range of software domains, [e.g.]{}, code authorship disputes, plagiarism detection, exposure of attackers\u2019 identity. Besides the inherent challenges from legacy software development, framework programming and crowdsourcing mode in Android raise the difficulties of authorship identification significantly. More specifically, widespread third party libraries and inherited components ([e.g.]{}, classes, methods, and variables) dilute the primary code within the entire Android app and blur the boundaries of code written by different authors. However, prior research has not well addressed these challenges.\n\n To this end, we design a two-phased approach to attribute the primary code of an Android app to the specific developer. In the first phase, we put forward three types of strategies to identify the relationships between Java packages in an app, which consist of context, semantic and structural relationships. A package aggregation algorithm is developed to cluster all packages that are of high probability written by the same authors. In the second phase, we develop three types of features to capture authors\u2019 coding habits and code stylometry. Based on that, we generate fingerprints for an author from" +"---\nauthor:\n- 'Santtu Tikka$^1$[^1]'\n- Jussi Hakanen$^2$\n- Mirka Saarela$^2$\n- Juha Karvanen$^1$\nbibliography:\n- 'references.bib'\n- 'litReview.bib'\ndate: |\n $^1$Department of Mathematics and Statistics, University of Jyvaskyla, Finland\\\n $^2$Faculty of Information Technology, University of Jyvaskyla, Finland\\\ntitle: 'Simulation Framework for Realistic Large-scale Individual-level Data Generation with an Application in the Health Domain'\n---\n\n\\\n\nIntroduction {#sec:intro}\n============\n\nSimulation is an important tool for decision making and scenario analysis, and healthcare is an example of a field where the use of simulation can provide substantial benefits. Simulated health data allow us to predict the population level development of risk factors, disease occurrence and case specific mortality under the given assumptions. Then, the predictions can be used in evaluating the effects of different policies and interventions in medical decision making in a prescriptive analytics approach. Simulated data are important also for statistical method development because the underlying true parameters are known and the obtained estimates can be easily compared with them. A simulation may start with real individual level data and simulate the future development with different assumptions. However, real health data are associated with legal and privacy concerns that complicate and sometimes even prevent their use in method development." 
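The simulation workflow described in the preceding health-domain abstract (individual-level data generated under fully known assumptions, so that downstream estimates can be checked against the truth) is easy to illustrate. The following is a minimal sketch, not the paper's framework: the cohort, the logistic hazard, and every coefficient and rate in it are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cohort: age and systolic blood pressure (SBP) per individual.
n = 10_000
age = rng.uniform(40, 70, n)
sbp = rng.normal(130, 15, n)
alive, diseased = np.ones(n, bool), np.zeros(n, bool)

def annual_risk(age, sbp):
    # Illustrative logistic hazard; coefficients are made up for the sketch.
    z = -9.0 + 0.05 * age + 0.02 * sbp
    return 1.0 / (1.0 + np.exp(-z))

for year in range(10):
    age += 1.0
    sbp += rng.normal(0.5, 2.0, n)                      # assumed drift
    p = annual_risk(age, sbp)
    new_cases = alive & ~diseased & (rng.random(n) < p)
    diseased |= new_cases
    deaths = alive & diseased & (rng.random(n) < 0.03)  # assumed case fatality
    alive &= ~deaths

print(f"10-year incidence: {diseased.mean():.3f}, deaths: {(~alive).mean():.3f}")
```

Because the generating mechanism is fully known here, estimates produced by any downstream method can be compared with the true parameters, which is precisely the benefit for method development that the abstract emphasizes.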
+"---\nabstract: 'Conventional displacement sensing techniques (e.g., laser, linear variable differential transformer) have been widely used in structural health monitoring in the past two decades. Though these techniques are capable of measuring displacement time histories with high accuracy, distinct shortcomings remain, such as point-to-point contact sensing, which limits their applicability in real-world problems. Video cameras have been widely used in the past years due to advantages that include low price, agility, high spatial sensing resolution, and non-contact measurement. Compared with target tracking approaches (e.g., digital image correlation, template matching, etc.), the phase-based method is powerful for detecting small subpixel motions without the use of paints or markers on the structure surface. Nevertheless, the complex computational procedure limits its real-time inference capacity. To address this fundamental issue, we develop a deep learning framework based on convolutional neural networks (CNNs) that enables real-time extraction of full-field subpixel structural displacements from videos. In particular, two new CNN architectures are designed and trained on a dataset generated by the phase-based motion extraction method from a single lab-recorded high-speed video of a dynamic structure. As displacement is only reliable in the regions with sufficient texture contrast, the sparsity of the motion field induced by the texture mask" +"---\nabstract: 'A wide range of techniques can be considered for segmentation of images of nanostructured surfaces. Manually segmenting these images is time-consuming and results in a user-dependent segmentation bias, while there is currently no consensus on the best automated segmentation methods for particular techniques, image classes, and samples. Any image segmentation approach must minimise the noise in the images to ensure accurate and meaningful statistical analysis can be carried out. Here we develop protocols for the segmentation of images of 2D assemblies of gold nanoparticles formed on silicon surfaces via deposition from an organic solvent. The evaporation of the solvent drives far-from-equilibrium self-organisation of the particles, producing a wide variety of nano- and micro-structured patterns. We show that a segmentation strategy using the U-Net convolutional neural network outperforms traditional automated approaches and has particular potential in the processing of images of nanostructured systems.'\naddress:\n- 'School of Science, Loughborough University, Epinal Way, Loughborough, LE11 3TU, United Kingdom'\n- 'School of Physics & Astronomy, The University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom'\n- 'School of Physics & Astronomy, The University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom'\n- 'School of Mechanical, Electrical and Manufacturing Engineering,
While electrostatic interactions play an essential role on the structure, thermodynamics, dynamics and reactivity of electrode-electrolyte interfaces, these properties also crucially depend on the nature of the ions and solvent, as well as that of the metal itself. Such interfaces pose many challenges for modeling, because they are a place where Quantum Chemistry meets Statistical Physics. In the present review, we explore the recent advances on the description and understanding of electrode-electrolyte interfaces with classical molecular simulations, with a focus on planar interfaces and solvent-based liquids, from pure solvent to water-in-salt-electrolytes.'\nauthor:\n- 'Laura Scalfi,$^1$ Mathieu Salanne,$^{1,2}$ and Benjamin Rotenberg$^{1,2}$'\ndate: 'Aug. 2020'\ntitle: 'Molecular Simulation of Electrode-Solution Interfaces'\n---\n\nINTRODUCTION\n============\n\nMany key industrial processes, from electricity production, conversion and storage\u00a0[@salanne2016a], to electrocatalysis or electrochemistry in general\u00a0[@seh_combining_2017], rely on physical mechanisms occurring at the interface between a metallic solid" +"---\nabstract: |\n Dialog State Tracking (DST) is one of the most crucial modules for goal-oriented dialogue systems. In this paper, we introduce FastSGT (Fast Schema Guided Tracker), a fast and robust BERT-based model for state tracking in goal-oriented dialogue systems. The proposed model is designed for the Schema-Guided Dialogue (SGD) dataset which contains natural language descriptions for all the entities including user intents, services, and slots. The model incorporates two carry-over procedures for handling the extraction of the values not explicitly mentioned in the current user utterance. It also uses multi-head attention projections in some of the decoders to have a better modelling of the encoder outputs.\n\n In the conducted experiments we compared FastSGT to the baseline model for the SGD dataset. Our model keeps the efficiency in terms of computational and memory consumption while improving the accuracy significantly. Additionally, we present ablation studies measuring the impact of different parts of the model on its performance. We also show the effectiveness of data augmentation for improving the accuracy without increasing the amount of computational resources.\nauthor:\n- Vahid Noroozi\n- Yang Zhang\n- Evelina Bakhturina\n- Tomasz Kornuta\nbibliography:\n- 'sample-base.bib'\ntitle: 'A Fast and Robust BERT-based Dialogue State" +"---\nabstract: 'As a vital topic in media content interpretation, video anomaly detection (VAD) has made fruitful progress via deep neural network (DNN). However, existing methods usually follow a reconstruction or frame prediction routine. They suffer from two gaps: (1) They cannot localize video activities in a both precise and comprehensive manner. (2) They lack sufficient abilities to utilize high-level semantics and temporal context information. Inspired by frequently-used *cloze test* in language study, we propose a brand-new VAD solution named *Video Event Completion* (VEC) to bridge gaps above: First, we propose a novel pipeline to achieve both precise and comprehensive enclosure of video activities. Appearance and motion are exploited as mutually complimentary cues to localize regions of interest (RoIs). A normalized spatio-temporal cube (STC) is built from each RoI as a *video event*, which lays the foundation of VEC and serves as a basic processing unit. 
Second, we encourage DNN to capture high-level semantics by solving a *visual cloze test*. To build such a visual cloze test, a certain patch of STC is erased to yield an incomplete event (IE). The DNN learns to restore the original video event from the IE by inferring the missing patch. Third, to incorporate" +"---\nabstract: 'Can artificial intelligence systems exhibit superhuman performance, but in critical ways, lack the intelligence of even a single-celled organism? The answer is clearly \u2018yes\u2019 for narrow AI systems. Animals, plants, and even single-celled organisms learn to reliably avoid danger and move towards food. This is accomplished via a physical knowledge-preserving metamodel that autonomously generates useful models of the world. We posit that preserving the structure of knowledge is critical for higher intelligences that manage increasingly higher levels of abstraction, be they human or artificial. This is the key lesson learned from applying AGI subsystems to complex real-world problems that require continuous learning and adaptation. In this paper, we introduce the Deep Fusion Reasoning Engine (DFRE), which implements a knowledge-preserving metamodel and framework for constructing applied AGI systems. The DFRE metamodel exhibits some important fundamental knowledge preserving properties such as clear distinctions between symmetric and anti-symmetric relations, and the ability to create a hierarchical knowledge representation that clearly delineates between levels of abstraction. The DFRE metamodel, which incorporates these capabilities, demonstrates how this approach benefits AGI in specific ways such as managing combinatorial explosion and enabling cumulative, distributed and federated learning. Our experiments show that the proposed framework achieves" +"---\nauthor:\n- 'Kuangen Zhang$^{1, 2, 3}$, Jongwoo Lee$^{2}$, Zhimin Hou$^{4}$, Clarence W. de Silva$^{3}$, Chenglong Fu$^{1}$, Neville Hogan$^{2}$ [^1][^2][^3][^4][^5][^6] [^7]'\nbibliography:\n- 'main.bib'\ntitle: '**How does the structure embedded in learning policy affect learning quadruped locomotion?** '\n---\n\nMulti-legged robots, sensorimotor learning, compliance and impedance control.\n\nINTRODUCTION {#sec:introduction}\n============\n\nUsing RL, mono-, bi-, and quadruped robots were successfully trained to walk in rich environments [@lillicrap_continuous_2015; @zhang_teach_2019]. Combining imitation learning with RL, Peng [*et al.* ]{}trained a biped robot in simulation to walk in complex environments and even mimic complex human behaviors, such as backflip and dancing\u00a0[@peng_deeploco_2017; @peng_deepmimic:_2018]. Xie [*et al.* ]{}utilized a reference trajectory to train and control a real biped robot (Cassie) to walk in a stable manner\u00a0[@xie_iterative_2019]. Transferring a neural network policy learned in a simulation environment to the real world has also been successful\u00a0[@hwangbo_learning_2019; @singla_realizing_2019].\n\nOne of the popular methods in RL for robotics is to learn a direct neural network policy that maps from robot states to joint torques\u00a0[@haarnoja_soft_2018; @fujimoto_addressing_2018; @hou_off-policy_2020]. There are several advantages to the direct policy. 
First, it requires little information about the robot model, hence it can be used for a general class of robots and tasks" +"---\nauthor:\n- '[[](https://orcid.org/0000-0003-1058-3396)]{}\\'\n---\n\n=1\n\nIntroduction\n============\n\nElectromagnetic radiation beams are attenuated by passing through an absorbent material. The Lambert-Beer (L-B) law describes how this attenuation depends on the concentration of the absorbent particles and on the optical path, provided that certain conditions are met[@skoog2003fundamentals]. This work has two principal aims: firstly, to provide simple yet rigorous derivations of the L-B law suitable for teaching in the classroom; secondly, to broaden our current understanding of this law by approaching it from different viewpoints.\n\nMany derivations of the L-B law have been proposed[@bare2000; @Strong; @Swinehart; @Pinkerton1964; @Lykos1992; @Santos; @Daniels]. From an abstract point of view, the derivations at some point state a relationship between internal transmittance ($T$) and concentration ($c$) or optical path ($b$) satisfied only by the exponential function. For this, different approaches can be followed. In \u00a7\\[Sec: Deduccion 1\\] a very brief and simple derivation of the L-B law is proposed. What makes this derivation accessible is that the relationship employed is the exponential identity $a^{x+y}=a^{x}a^{y}$, which is simple and known from introductory courses.\n\nBerberan-Santos[@Santos] and Daniels[@Daniels] proposed proofs closely connected to gas kinetic theory. These are quite rigorous and provide a clear picture of the phenomenon." +"---\nabstract: |\n The $X(3872)$ resonance has been conjectured to be a $J^{PC} = 1^{++}$ charm meson-antimeson two-body molecule. Meanwhile, there is no experimental evidence for larger, few-body compounds of multiple charm meson-antimeson pairs which would resemble larger molecules or nuclei. Here, we investigate such multi-meson states to the extent of what can be deduced theoretically from essentials of the interaction between uncharged $D^{0}$ and $D^{*0}$ mesons. From a molecular $X(3872)$, we predict a $4X$ ($4^{++}$) octamer with a binding energy assuming a $D^{*0} \\bar{D}^0$ system close to the unitary limit (as suggested by the mass of the $X(3872)$). If we consider heavy-quark spin symmetry explicitly, the $D^{*0} \\bar{D}^{*0}$ ($2^{++}$) system is close to unitarity, too. In this case, we predict a bound $3X$ ($3^{++}$) hexamer with $B_{3X} > 2.29\\,{\\rm MeV}$ and a more deeply bound $4X$ octamer with $B_{4X} > 11.21\\,{\\rm MeV}$. These results exemplify with hadronic molecules a more general phenomenon of equal-mass two-species Bose systems comprised of equal numbers of either type: the emergence of unbound four- and six-boson clusters in the limit of a short-range two-body interaction which acts only between bosons of different species. Finally, we also study the conditions under which a $2X$" +"---\nabstract: 'We prove that finitely generated Kleinian groups $\\Gamma<\\operatorname{{\\mathrm Isom}}(\\H^{n})$ with small critical exponent are always convex-cocompact. 
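Returning to the Lambert-Beer introduction above: the functional-equation step it alludes to fits in a few lines. This is a standard textbook derivation, written out here only to make the role of the exponential identity $a^{x+y}=a^{x}a^{y}$ explicit; it is not a reproduction of the paper's Section 1.

```latex
% Stacking two absorbing layers of thicknesses b_1 and b_2 attenuates the
% beam independently in each layer, so internal transmittance is
% multiplicative:
%     T(b_1 + b_2) = T(b_1) T(b_2),   with   T(0) = 1.
% For continuous T, this Cauchy-type equation is solved exactly by the
% exponentials T(b) = a^b, since a^{b_1 + b_2} = a^{b_1} a^{b_2}.
% Writing the base as a = 10^{-\varepsilon c} gives the Lambert--Beer law:
\[
  T(b) = 10^{-\varepsilon b c},
  \qquad
  A = -\log_{10} T(b) = \varepsilon\, b\, c ,
\]
% where \varepsilon is the molar absorptivity, b the optical path, and
% c the concentration of the absorbing species.
```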
Along the way, we also prove some geometric properties for any complete pinched negatively curved manifold with critical exponent less than 1.'\naddress:\n- 'School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, USA'\n- 'Department of mathematics, Michigan State University, East Lansing, MI 48824, USA'\nauthor:\n- Beibei Liu\n- Shi Wang\ntitle: Discrete subgroups of small critical exponent\n---\n\n[***Index terms\u2014*** discrete subgroups, critical exponent, convex cocompactness.]{}\n\nIntroduction {#sec:introduction}\n============\n\nA *Kleinian* group is a discrete isometry subgroup of $\\operatorname{{\\mathrm Isom}}(\\H^{n})$. The study of $3$-dimensional finitely generated Kleinian groups dates back to Schottky, Poincar\u00e9 and Klein. It is only recently that the geometric picture of the associated hyperbolic manifold has been much better understood, after the celebrated work of Ahlfors\u2019 finiteness theorem [@Ah], the proof of the tameness conjecture [@Bon; @Agol; @Gabai], and the unraveling of the Ending Lamination Conjecture [@Min; @BCM; @som; @bow2]. However, such geometric descriptions fail in higher dimensions [@Kap1; @Kap2; @KP1; @KP2; @Poty; @poty2].\n\nTo study higher dimensional Kleinian groups, one way is to consider the interplay between the group theoretic properties, the geometry of" +"---\nabstract: 'Wireless energy transfer (WET) is a green enabler of low-power Internet of Things (IoT). Therein, traditional optimization schemes relying on full channel state information (CSI) are often too costly to implement due to excessive energy consumption and high processing complexity. This letter proposes a simple, yet effective, energy beamforming scheme that allows a multi-antenna power beacon (PB) to fairly power a set of IoT devices by only relying on the first-order statistics of the channels. In addition to low complexity, the proposed scheme performs favorably as compared to benchmarking schemes and its performance improves as the number of PB\u2019s antennas increases. Finally, it is shown that further performance improvement can be achieved through proper angular rotations of the PB.'\nauthor:\n- 'Onel L. A. L\u00f3pez, Francisco\u00a0A.\u00a0Monteiro, Hirley Alves, Rui Zhang, and Matti Latva-aho, [^1] [^2] [^3] [^4] [^5]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'references.bib'\ntitle: 'A Low-Complexity Beamforming Design for Multiuser Wireless Energy Transfer'\n---\n\nWET, statistical CSI, first-order statistics, energy beamforming, IoT, antenna rotation.\n\nIntroduction {#intro}\n============\n\nWireless energy transfer (WET) technology is widely recognized as a green enabler of low-power Internet of Things (IoT) since it realizes [@Lopez.2019]: i) battery charging without physical connections, which" +"---\nabstract: 'A common task for recommender systems is to build a profile of the interests of a user from items in their browsing history and later to recommend items to the user from the same catalog. The users\u2019 behavior consists of two parts: the sequence of items that they viewed without intervention (the organic part) and the sequences of items recommended to them and their outcome (the bandit part). In this paper, we propose *Bayesian Latent Organic Bandit model (BLOB)*, a probabilistic approach to combine the \u2018organic\u2019 and \u2018bandit\u2019 signals in order to improve the estimation of recommendation quality. 
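Back to the energy-beamforming letter above: a toy numerical sketch shows how a beam can be formed from first-order channel statistics alone. The design below (dominant eigenvector of normalized mean-channel outer products) and all array sizes and noise levels are illustrative assumptions of this sketch, not the scheme proposed in the letter.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 8, 4        # power-beacon antennas, IoT devices (assumed sizes)

# First-order statistics only: mean (e.g., LOS) channel vector per device.
h_bar = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

# One statistics-only design (an illustration): dominant eigenvector of the
# sum of normalized mean-channel outer products; no instantaneous CSI needed.
R = sum(np.outer(h, h.conj()) / np.linalg.norm(h) ** 2 for h in h_bar)
w = np.linalg.eigh(R)[1][:, -1]          # unit-norm energy beam

# Average received RF power when h_k = h_bar_k + CN(0, s2 I):
# E|w^H h_k|^2 = |w^H h_bar_k|^2 + s2 for any unit-norm w.
s2 = 0.1
avg_power = np.abs(h_bar.conj() @ w) ** 2 + s2
print("per-device average power:", np.round(avg_power, 3))
print("fairness (worst device):", round(float(avg_power.min()), 3))
```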
The bandit signal is valuable as it gives direct feedback of recommendation performance, but the signal quality is very uneven, as it is highly concentrated on the recommendations deemed optimal by the past version of the recommender system. In contrast, the organic signal is typically strong and covers most items, but is not always relevant to the recommendation task. In order to leverage the organic signal to efficiently learn the bandit signal in a Bayesian model we identify three fundamental types of distances, namely action-history, action-action and history-history distances. We implement a scalable approximation of the full model using variational" +"---\nabstract: 'Applications in a range of domains, including route planning and well-being, offer advice based on the social information available in prior users\u2019 aggregated activity. When designing these applications, is it better to offer: a) advice that if strictly adhered to is more likely to result in an individual successfully achieving their goal, even if fewer users will choose to adopt it? or b) advice that is likely to be adopted by a larger number of users, but which is sub-optimal with regard to any particular individual achieving their goal? We identify this dilemma, characterized as *Goal-Directed* vs. *Adoption-Directed* advice, and investigate the design questions it raises through an online experiment undertaken in four advice domains (financial investment, making healthier lifestyle choices, route planning, training for a 5k run), with three user types, and across two levels of uncertainty. We report findings that suggest a preference for advice favoring individual goal attainment over higher user adoption rates, albeit with significant variation across advice domains; and discuss their design implications.'\nauthor:\n- Graham Dove\n- Martina Balestra\n- Devin Mann\n- Oded Nov\nbibliography:\n- 'advice.bib'\ntitle: 'Good for the Many or Best for the Few? A Dilemma in the" +"---\nabstract: |\n We propose a $\\mathcal{C}^0$ Interior Penalty Method (C0-IPM) for the computational modelling of flexoelectricity, with application also to strain gradient elasticity, as a simplified case. Standard high-order $\\mathcal{C}^0$ finite element approximations, with nodal basis, are considered. The proposed C0-IPM formulation involves second derivatives in the interior of the elements, plus integrals on the mesh faces (sides in 2D), that impose $\\mathcal{C}^1$ continuity of the displacement in weak form. The formulation is stable for large enough interior penalty parameter, which can be estimated solving an eigenvalue problem. The applicability and convergence of the method is demonstrated with 2D and 3D numerical examples.\\\n *Keywords:\u00a0* 4th order PDE ,$\\mathcal{C}^0$ finite elements ,interior penalty method ,strain gradient elasticity ,flexoelectricity\nauthor:\n- |\n J. Ventura$^{1,a}$, D. Codony$^{1,b}$, S. Fern\u00e1ndez-M\u00e9ndez$^{1,c,\\ast}$\\\n \\\n \\\n \\\n \\\n \\\n \\\nbibliography:\n- 'BibliografiaFlexo.bib'\ntitle: A C0 interior penalty finite element method for flexoelectricity\n---\n\nIntroduction {#intro}\n============\n\nThe rising interest on microtechnology evidences the need for mathematical and computational models suitable for small scales, often giving rise to $4^\\text{th}$ order Partial Differential Equations (PDEs). 
In particular flexoelectric effects become relevant, and may be crucial, in the design of small electromechanical devices or for the understanding of" +"---\nabstract: 'A *degree sequence* is a sequence ${\\mathbf{s}}=(N_i,i\\geq 0)$ of non-negative integers satisfying $1+\\sum_i iN_i=\\sum_i N_i<\\infty$. We are interested in the uniform distribution ${\\ensuremath{ \\mathbb{P} } }_{{\\bf s}}$ on rooted plane trees whose degree sequence equals ${\\bf s}$, giving conditions for the convergence of the profile (sequence of generation sizes) as the size of the tree goes to infinity. This provides a more general formulation and a probabilistic proof of a conjecture due to Aldous [@MR1166406]. Our formulation contains and extends results in this direction obtained previously by Drmota and Gittenberger [@MR1608230] and Kersting [@kersting2011height]. A technical result is needed to ensure that trees with law ${\\ensuremath{ \\mathbb{P} } }_{{\\bf s}}$ have enough individuals in the first generations, and this is handled through novel path transformations and fluctuation theory of exchangeable increment processes. As a consequence, we obtain a boundedness criterion for the inhomogeneous continuum random tree introduced by Aldous, Miermont and Pitman [@MR2063375].'\nauthor:\n- 'Osvaldo Angtuncio[^1] Ger\u00f3nimo Uribe Bravo[^2]'\nbibliography:\n- 'GenBib.bib'\ntitle: On the profile of trees with a given degree sequence\n---\n\n*[Keywords:]{}* Configuration model; exchangeable increment processes; Vervaat transform; Lamperti transform.\n\n*[AMS subject classifications:]{}* 05C05; 34A36; 60F17; 60G09; 60G17; 60J80\n\nIntroduction and statement of" +"---\nauthor:\n- 'R. Jarolim'\n- 'A. M. Veronig'\n- 'W. P\u00f6tzi'\n- 'T. Podladchikova'\nbibliography:\n- 'references.bib'\ndate: 'Received 18 June 2020; accepted 24 August 2020'\ntitle: 'Image Quality Assessment for Full-Disk Solar Observations with Generative Adversarial Networks'\n---\n\n[Within the last decades, solar physics has entered the era of big data and the amount of data being constantly produced from ground- and space-based observatories can no longer be purely analyzed by human observers.]{} [In order to assure a stable series of recorded images of sufficient quality for further scientific analysis, an objective image quality measure is required. Especially when dealing with ground-based observations, which are subject to varying seeing conditions and clouds, the quality assessment has to take multiple effects into account and provide information about the affected regions. The automatic and robust identification of quality-degrading effects is a critical task, in order to maximize the scientific return from the observations and to allow for event detections in real-time. In this study, we develop a deep learning method that is suited to identify anomalies and provide an image quality assessment of solar full-disk H$\\alpha$ filtergrams. The approach is based on the structural appearance and the true image distribution" +"---\nabstract: 'An appealing feature of Network Function Virtualization (NFV) is that in an NFV-based network, a network function (NF) instance may be placed at any node. On the one hand this offers great flexibility in allocation of redundant instances, but on the other hand it makes the allocation a unique and difficult challenge. One particular concern is that there is inherent correlation among nodes due to the structure of the network, thus requiring special care in this allocation. 
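The degree-sequence abstract above is easy to experiment with numerically. The sketch below draws a uniform rooted plane tree with a prescribed child-count multiset by rejection sampling (the cycle lemma makes the acceptance probability of order $1/n$, so roughly $n$ tries are expected) and tabulates its profile; it illustrates the objects studied, not the paper's proofs, and the example degree sequence is invented.

```python
import random
from collections import Counter

def profile_of_random_tree(child_counts, rng=random.Random(7)):
    """Profile (generation sizes) of a uniform rooted plane tree whose
    multiset of child counts is `child_counts` (needs sum(c) == len(c)-1)."""
    n = len(child_counts)
    assert sum(child_counts) == n - 1
    seq = list(child_counts)
    while True:                      # rejection sampling; ~n expected tries
        rng.shuffle(seq)
        s, ok = 0, True
        # The Lukasiewicz walk of a preorder sequence first hits -1 at step n.
        for k, c in enumerate(seq):
            s += c - 1
            if s < 0 and k < n - 1:
                ok = False
                break
        if ok:
            break
    depths, slots = [], [0]          # stack of open child slots (root at 0)
    for c in seq:                    # rebuild node depths in preorder
        d = slots.pop()
        depths.append(d)
        slots.extend([d + 1] * c)
    prof = Counter(depths)
    return [prof[d] for d in range(max(depths) + 1)]

# Degree sequence s = (N_i): six leaves, one binary and two ternary nodes.
print(profile_of_random_tree([0] * 6 + [2] + [3] * 2))
```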
To this aim, our novel approach, called *CoShare*, is proposed. Firstly, its design takes into consideration the effect of network structural dependency, which might result in the unavailability of nodes of a network after the failure of a node. Secondly, to efficiently make use of resources, CoShare proposes the idea of *shared reservation*, where multiple flows may be allowed to share the same reserved backup capacity at an NF instance. Furthermore, CoShare factors in the heterogeneity in nodes, NF instances and availability requirements of flows in the design. The results from a number of experiments conducted using realistic network topologies show that the integration of structural dependency allows meeting availability requirements for more flows compared to a baseline approach. Specifically, CoShare is" +"---\nabstract: 'We introduce the notion of (half) 2-adjoint equivalences in Homotopy Type Theory and prove their expected properties. We formalized these results in the Lean Theorem Prover.'\naddress:\n- 'Department of Mathematics, University of Western Ontario'\n- 'Department of Combinatorics and Optimization, University of Waterloo'\n- 'Department of Mathematics, University of Western Ontario'\n- 'Department of Computer Science, University of Western Ontario'\nauthor:\n- Daniel Carranza\n- Jonathan Chang\n- Krzysztof Kapulkin\n- Ryan Sandford\nbibliography:\n- 'lmcs7007.bib'\ntitle: '2-adjoint equivalences in homotopy type theory'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nThere are numerous notions of equivalence in homotopy type theory: bi-invertible maps, contractible maps, and half adjoint equivalences. Other natural choices, such as quasi-invertible maps and adjoint equivalences, while *logically* equivalent to the above, are not propositions, making them unsuitable to serve as *the* definition of an equivalence. One can use a simple semantical argument, which in essence comes down to analyzing subcomplexes of the nerve of the groupoid $(0 \\cong 1)$, to see why some definitions work and others do not. The conclusion here is that while the definition as a \u201chalf $n$-adjoint equivalence\u201d gives us a proposition, the definition as a \u201c(full) $n$-adjoint equivalence\u201d does not.\n\nIn" +"---\nabstract: 'We present the properties of the inverse Evershed flow (IEF) based on the center-to-limb variation of the plasma speed and loop geometry of chromospheric superpenumbral fibrils in eleven sunspots that were located at a wide range of heliocentric angles from 12$^\\circ$ to 79$^\\circ$. The observations were acquired at the Dunn Solar Telescope in the spectral lines of H$\\alpha$ at 656 nm, IR at 854 nm and at 1083 nm. All sunspots display opposite line-of-sight (LOS) velocities on the limb and center side with a distinct shock signature near the outer penumbral edge. We developed a simplified flexible sunspot model assuming axisymmetry and prescribing the radial flow speed profile at a known loop geometry to replicate the observed two-dimensional IEF patterns under different viewing angles. The simulated flow maps match the observations for chromospheric loops with 10\u201320 Mm length starting at 0.8\u20131.1 sunspot radii, an apex height of 2\u20133 Mm and a true constant flow speed of 2\u20139 km s$^{-1}$. We find on average a good agreement of the simulated velocities and the observations on elliptical annuli around the sunspot. 
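The simplified sunspot model just described can be caricatured in a few lines: project a constant flow speed along an assumed semi-elliptical loop onto an inclined line of sight, and the LOS velocity changes sign between the rising and descending legs. The loop dimensions, viewing angle, and flow speed below are illustrative values chosen within the ranges quoted above, not the paper's fitted model.

```python
import numpy as np

theta = np.deg2rad(45.0)          # heliocentric viewing angle (assumed)
los = np.array([np.sin(theta), 0.0, np.cos(theta)])   # unit LOS vector

# Assumed semi-elliptical loop in the x-z plane (x: radial, z: height),
# 10 Mm footpoint-to-footpoint with a 2.5 Mm apex, and a constant 5 km/s
# flow along the loop from the outer to the inner footpoint.
t = np.linspace(0.0, np.pi, 200)
x, z = 5.0 * (1.0 - np.cos(t)), 2.5 * np.sin(t)       # Mm
dx, dz = np.gradient(x), np.gradient(z)
tang = np.stack([dx, np.zeros_like(dx), dz], axis=1)
tang /= np.linalg.norm(tang, axis=1, keepdims=True)

v_los = 5.0 * tang @ los                              # km/s along the loop
print(f"LOS velocity runs from {v_los.min():+.2f} to {v_los.max():+.2f} km/s")
# The sign flip between the two legs is what produces opposite red/blue
# shifts on the two sides of the spot for an inclined line of sight.
```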
Individual IEF channels show a significant range of variation in their properties and reach maximal LOS speeds of up to 12 km s$^{-1}$. Upwards or downwards directed" +"---\nabstract: 'The quantum dilogarithm function of Faddeev is a special function that plays a key role as the building block of quantum invariants of knots and 3-manifolds, of quantum Teichm\u00fcller theory and of complex Chern\u2013Simons theory. Motivated by conjectures on resurgence and the recent interest in wall-crossing phenomena, we prove that the Borel summation of a formal power series solution of a linear difference equation produces Faddeev\u2019s quantum dilogarithm. Along the way, we give an explicit formula for the Borel transform, a meromorphic function in the Borel plane, locate its poles and residues and describe the Stokes phenomenon of its Laplace transforms along the Stokes rays.'\naddress:\n- |\n International Center for Mathematics, Department of Mathematics\\\n Southern University of Science and Technology\\\n Shenzhen, China \n- |\n Section de Math\u00e9matiques, Universit\u00e9 de Gen\u00e8ve\\\n 2-4 rue du Li\u00e8vre, Case Postale 64, 1211 Gen\u00e8ve 4, Switzerland \nauthor:\n- Stavros Garoufalidis\n- Rinat Kashaev\nbibliography:\n- 'biblio.bib'\ndate: '4 October, 2020'\ntitle: ' Resurgence of Faddeev\u2019s quantum dilogarithm'\n---\n\n[^1]\n\nIntroduction {#sec.intro}\n============\n\nA well-known problem in quantum topology is the Volume Conjecture, which asserts that the Kashaev invariant of a hyperbolic knot grows exponentially at a rate proportional to the volume of" +"---\nabstract: 'Today\u2019s quantum computers are comprised of tens of qubits interacting with each other and the environment in increasingly complex networks. In order to achieve the best possible performance when operating such systems, it is necessary to have accurate knowledge of all parameters in the quantum computer Hamiltonian. In this article, we demonstrate theoretically and experimentally a method to efficiently learn the parameters of resonant interactions for quantum computers consisting of frequency-tunable superconducting qubits. Such interactions include, for example, those to other qubits, resonators, two-level state defects, or other unwanted modes. Our method is based on a significantly improved swap spectroscopy calibration and consists of an *offline* data collection algorithm, followed by an *online* Bayesian learning algorithm. The purpose of the offline algorithm is to detect and roughly estimate resonant interactions from a state of zero knowledge. It produces a square-root reduction in the number of measurements. The online algorithm subsequently refines the estimate of the parameters to comparable accuracy as traditional swap spectroscopy calibration, but in constant time. We perform an experiment implementing our technique with a superconducting qubit. By combining both algorithms, we observe a reduction of the calibration time by one order of magnitude. We believe" +"---\nabstract: |\n We construct completely integrable torus actions on the dual Lie algebra of any compact Lie group $K$ with respect to the standard Lie-Poisson structure. These systems generalize properties of Gelfand-Zeitlin systems for unitary and orthogonal Lie groups: 1) the pullback to any Hamiltonian $K$-manifold is an integrable torus action, 2) if the $K$-manifold is multiplicity free, then the torus action is *completely* integrable, and 3) the collective moment map has convexity and fiber connectedness properties. 
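Returning to the swap-spectroscopy calibration abstract above: the flavor of its online Bayesian stage can be conveyed with a grid-posterior toy model. The two-level Rabi likelihood, the grids, the probe times, and the units below are assumptions made for this sketch, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def p_excited(t, g, delta):
    """Textbook Rabi formula: excitation probability of a two-level system
    with coupling g and detuning delta after evolving for time t."""
    omega = np.sqrt(g ** 2 + delta ** 2)
    return (g ** 2 / omega ** 2) * np.sin(omega * t / 2.0) ** 2

g_true, d_true = 8.2, 2.5            # hidden truth (angular MHz, assumed)

g_grid = np.linspace(0.5, 15.0, 120)
d_grid = np.linspace(-10.0, 10.0, 121)
G, D = np.meshgrid(g_grid, d_grid, indexing="ij")
log_post = np.zeros_like(G)          # flat prior on the grid

for _ in range(300):                 # single projective shots at random times
    t = rng.uniform(0.0, 2.0)        # microseconds (assumed units)
    shot = rng.random() < p_excited(t, g_true, d_true)
    p = np.clip(p_excited(t, G, D), 1e-9, 1 - 1e-9)
    log_post += np.log(p if shot else 1.0 - p)

i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
# Note: only |delta| is identifiable in this toy likelihood.
print(f"MAP: g = {G[i, j]:.2f}, |delta| = {abs(D[i, j]):.2f} "
      f"(truth {g_true}, {d_true})")
```

Each shot updates the whole posterior in one vectorized pass, which is the sense in which such an online refinement can run in time independent of the amount of earlier offline data.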
They also generalize the relationship between Gelfand-Zeitlin systems and canonical bases via geometric quantization by a real polarization.\n\n To construct these integrable systems, we generalize Harada and Kaveh\u2019s construction of integrable systems by toric degeneration to singular quasi-projective varieties. Under certain conditions, we show that the stratified-gradient Hamiltonian vector field of such a degeneration, which is defined piece-wise, has a flow whose limit exists and defines a continuous degeneration map.\naddress:\n- Earth Species Project\n- 'Amazon.com Services LLC, New York, NY, USA'\nauthor:\n- Benjamin Hoffman\n- Jeremy Lane\nbibliography:\n- 'degenerationsBibliography.bib'\ndate:\n- \n- 'June 11, 2021'\ntitle: Stratified gradient Hamiltonian vector fields and collective integrable systems\n---\n\nIntroduction\n============\n\nOur work divides into two parts. Part I concerns the construction" +"---\nabstract: 'Assessing the resilience of a road network is instrumental in improving existing infrastructures and designing new ones. Here we apply the optimal path crack model (OPC) to investigate the mobility of road networks and propose a new proxy for resilience of urban mobility. In contrast to static approaches, the OPC accounts for the dynamics of rerouting as a response to traffic jams. Precisely, one simulates a sequence of failures (cracks) at the most vulnerable segments of the optimal origin-destination paths that are capable of collapsing the system. Our results with synthetic and real road networks reveal that their levels of disorder, fractions of unidirectional segments and spatial correlations can drastically affect the vulnerability to traffic congestion. By applying the OPC to downtown Boston and Manhattan, we found that Boston is significantly more vulnerable than Manhattan. This is compatible with the fact that Boston heads the list of American metropolitan areas with the highest average time wasted in traffic. Moreover, our analysis discloses that the origin of this difference comes from the intrinsic spatial correlations of each road network. Finally, we argue that, due to their global influence, the most important cracks identified with OPC can be used to" +"---\nabstract: 'Spell check is a useful application which processes noisy human-generated text. Spell check for Chinese poses unresolved problems due to the large number of characters, the sparse distribution of errors, and the dearth of resources with sufficient coverage of heterogeneous and shifting error domains. For Chinese spell check, filtering using confusion sets narrows the search space and makes finding corrections easier. However, most, if not all, confusion sets used to date are fixed and thus do not include new, shifting error domains. We propose a scalable adaptable filter that exploits hierarchical character embeddings to (1) obviate the need to handcraft confusion sets, and (2) resolve sparsity problems related to infrequent errors. 
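Stepping back to the road-network abstract earlier in this passage: the OPC loop it summarizes has a compact generic implementation, sketched below on a toy weighted grid with networkx. Treating the highest-weight segment of each optimal path as the "most vulnerable" one, and the disorder range on travel times, are assumptions of this sketch rather than details taken from the paper.

```python
import networkx as nx
import random

rng = random.Random(3)

# Toy road network: a grid with quenched disorder on travel times.
G = nx.grid_2d_graph(12, 12)
for u, v in G.edges:
    G[u][v]["time"] = rng.uniform(1.0, 2.0)

origin, dest = (0, 0), (11, 11)
cracks = 0
while nx.has_path(G, origin, dest):
    path = nx.shortest_path(G, origin, dest, weight="time")
    # Fail the most vulnerable segment of the current optimal path
    # (here: the largest travel time), i.e. one "crack".
    edges = list(zip(path, path[1:]))
    u, v = max(edges, key=lambda e: G[e[0]][e[1]]["time"])
    G.remove_edge(u, v)
    cracks += 1

print(f"cracks needed to disconnect origin from destination: {cracks}")
```

The number of cracks needed to disconnect an origin-destination pair is the kind of quantity that can serve as a resilience proxy, and rerouting is built in because the optimal path is recomputed after every failure.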
Our approach compares favorably with competitive baselines and obtains SOTA results on the 2014 and 2015 Chinese Spelling Check Bake-off datasets.'\nauthor:\n- 'Minh\u00a0Nguyen\u00a0, Gia\u00a0H.\u00a0Ngo, and Nancy\u00a0F.\u00a0Chen\u00a0[^1]'\nbibliography:\n- 'spell.bib'\n- 'nlp.bib'\ntitle: 'Domain-shift Conditioning using Adaptable Filtering via Hierarchical Embeddings for Robust Chinese Spell Check'\n---\n\nIntroduction {#sec:intro}\n============\n\nSpell check is a common task in processing written text, as spell checkers are an integral component in text editors and search" +"---\nabstract: 'The recently developed Directional RElativistic Spectrum Simulator (DRESS) code has been validated for the first time against numerical calculations and experimental measurements performed on MAST. In this validation, the neutron emissivities and rates computed by DRESS are benchmarked against TRANSP/NUBEAM predictions while the neutron energy spectra provided by DRESS taking as input TRANSP/NUBEAM and ASCOT/BBNBI in Gyro-Orbit (GO) mode fast ion distributions are validated against proton pulse height spectra (PHS) measured by the neutron flux monitor. Excellent agreement was found between DRESS and TRANSP/NUBEAM predictions of local and total neutron emission.'\naddress:\n- '$^1$ Department of Physics and Astronomy, Uppsala University, SE-751 05 Uppsala, Sweden'\n- '$^2$ Princeton Plasma Physics Laboratory, Princeton, NJ 08543-0451, USA'\n- '$^3$ Department of Applied Physics, Aalto University, P.O. Box 11100, 00076 AALTO, Finland'\nauthor:\n- |\n A. Sperduti$^1$, I. Klimek$^1$, S. Conroy$^1$, M. Cecconello$^1$,\\\n M. Gorelenkova$^2$ and A. Snicker$^3$\nbibliography:\n- 'Bibliografia.bib'\ntitle: Validation of neutron emission and neutron energy spectrum calculations on MAST with DRESS\n---\n\nIntroduction\n============\n\nModelling of the neutron emission from the plasma can be used to assess the local and total plasma performances in terms of fast ion confinement and transport, while modelling of the neutron energy" +"---\nbibliography:\n- 'Bibliography.bib'\n---\n\nDISCRETE CONVOLUTION STATISTIC FOR HYPOTHESIS TESTING\n\nGiulio Prevedello, p.giulio@hotmail.it\n\nKen R. Duffy, ken.duffy@mu.ie\n\nHamilton Institute\n\nMaynooth University\n\nMaynooth, Ireland\n\nKey Words: discrete convolution; sum of discrete random variables; statistical hypothesis testing; nonparametric maximum-likelihood estimation; sub-independence\n\nMathematics Subject Classification: 62G05; 62G10; 62G20; 62P10; 62P20\n\nABSTRACT\n\nThe question of testing for equality in distribution between two linear models, each consisting of sums of distinct discrete independent random variables with unequal numbers of observations, has emerged from biological research. In this case, the computation of classical $\\chi^2$ statistics, which would not include all observations, results in loss of power, especially when sample sizes are small. Here, as an alternative that uses all data, the nonparametric maximum likelihood estimator for the distribution of a sum of discrete and independent random variables, which we call the convolution statistic, is proposed and its limiting normal covariance matrix determined. To challenge null hypotheses about the distribution of this sum, the generalized Wald\u2019s method is applied to define a testing statistic whose distribution is asymptotic to a $\\chi^2$ with as many degrees of freedom as the rank of such covariance matrix. 
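For the convolution statistic just defined, the special case where the two component samples are observed separately is easy to compute: the nonparametric MLE of each summand's pmf is its empirical pmf, and the plug-in estimate of the sum's law is their discrete convolution. The sketch below stops at that estimator; the limiting covariance matrix and the generalized Wald test from the paper are not reproduced, and all distributions here are invented for illustration.

```python
from math import comb, exp, factorial
import numpy as np

rng = np.random.default_rng(4)

def empirical_pmf(sample, support_size):
    return np.bincount(sample, minlength=support_size) / len(sample)

# Unequal numbers of observations of the two independent summands.
x = rng.binomial(3, 0.4, size=500)              # support {0,...,3}
y = rng.poisson(1.2, size=200).clip(max=5)      # tail lumped into 5

# With separately observed components, the NPMLE of each pmf is empirical,
# and the plug-in estimate of the law of X + Y is their convolution.
conv_hat = np.convolve(empirical_pmf(x, 4), empirical_pmf(y, 6))

# Null pmf of X + Y built from the true components (an assumption here).
p0 = np.array([comb(3, k) * 0.4 ** k * 0.6 ** (3 - k) for k in range(4)])
q0 = np.array([exp(-1.2) * 1.2 ** k / factorial(k) for k in range(6)])
q0[-1] += 1.0 - q0.sum()                        # truncated Poisson tail
conv0 = np.convolve(p0, q0)

print("support of X+Y:", np.arange(conv_hat.size))
print("max deviation from null:", float(np.abs(conv_hat - conv0).max()))
```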
Rank analysis also reveals a connection with the roots of the probability" +"---\nabstract: |\n This work reconsiders the concept of community-based trip sharing proposed by @hasan2018 that leverages the structure of commuting patterns and urban communities to optimize trip sharing. It aims at quantifying the benefits of autonomous vehicles for community-based trip sharing, compared to a car-pooling platform where vehicles are driven by their owners. In the considered problem, each rider specifies a desired arrival time for her inbound trip (commuting to work) and a departure time for her outbound trip (commuting back home). In addition, her commute time cannot deviate too much from the duration of a direct trip. Prior work motivated by reducing parking pressure and congestion in the city of Ann Arbor, Michigan, showed that a car-pooling platform for community-based trip sharing could reduce the number of vehicles by close to 60%.\n\n This paper studies the potential benefits of autonomous vehicles in further reducing the number of vehicles needed to serve all these commuting trips. It proposes a column-generation procedure that generates and assembles mini routes to serve inbound and outbound trips, using a lexicographic objective that first minimizes the required vehicle count and then the total travel distance. The optimization algorithm is evaluated on a large-scale, real-world" +"---\nauthor:\n- 'P.\u00a0Abratenko'\n- 'M.\u00a0Alrashed'\n- 'R.\u00a0An'\n- 'J.\u00a0Anthony'\n- 'J.\u00a0Asaadi'\n- 'A.\u00a0Ashkenazi'\n- 'S.\u00a0Balasubramanian'\n- 'B.\u00a0Baller'\n- 'C.\u00a0Barnes'\n- 'G.\u00a0Barr'\n- 'V.\u00a0Basque'\n- 'L.\u00a0Bathe-Peters'\n- 'O.\u00a0Benevides\u00a0Rodrigues'\n- 'S.\u00a0Berkman'\n- 'A.\u00a0Bhanderi'\n- 'A.\u00a0Bhat'\n- 'M.\u00a0Bishai'\n- 'A.\u00a0Blake'\n- 'T.\u00a0Bolton'\n- 'L.\u00a0Camilleri'\n- 'D.\u00a0Caratelli'\n- 'I.\u00a0Caro\u00a0Terrazas'\n- 'R.\u00a0Castillo\u00a0Fernandez'\n- 'F.\u00a0Cavanna'\n- 'G.\u00a0Cerati'\n- 'Y.\u00a0Chen'\n- 'E.\u00a0Church'\n- 'D.\u00a0Cianci'\n- 'E.\u00a0O.\u00a0Cohen'\n- 'J.\u00a0M.\u00a0Conrad'\n- 'M.\u00a0Convery'\n- 'L.\u00a0Cooper-Troendle'\n- 'J.\u00a0I.\u00a0Crespo-Anad\u00f3n'\n- 'M.\u00a0Del\u00a0Tutto'\n- 'D.\u00a0Devitt'\n- 'R.\u00a0Diurba'\n- 'L.\u00a0Domine'\n- 'R.\u00a0Dorrill'\n- 'K.\u00a0Duffy'\n- 'S.\u00a0Dytman'\n- 'B.\u00a0Eberly'\n- 'A.\u00a0Ereditato'\n- 'L.\u00a0Escudero\u00a0Sanchez'\n- 'J.\u00a0J.\u00a0Evans'\n- 'A.\u00a0A.\u00a0Fadeeva'\n- 'G.\u00a0A.\u00a0Fiorentini\u00a0Aguirre'\n- 'R.\u00a0S.\u00a0Fitzpatrick'\n- 'B.\u00a0T.\u00a0Fleming'\n- 'N.\u00a0Foppiani'\n- 'D.\u00a0Franco'\n- 'A.\u00a0P.\u00a0Furmanski'\n- 'D.\u00a0Garcia-Gamez'\n- 'S.\u00a0Gardiner'\n- 'S.\u00a0Gollapinni'\n- 'O.\u00a0Goodwin'\n- 'E.\u00a0Gramellini'\n- 'P.\u00a0Green'\n- 'H.\u00a0Greenlee'\n- 'L.\u00a0Gu'\n- 'W.\u00a0Gu'\n- 'R.\u00a0Guenette'" +"---\nabstract: 'This paper focuses on the regression of multiple 3D people from a single RGB image. Existing approaches predominantly follow a multi-stage pipeline that first detects people in bounding boxes and then independently regresses their 3D body meshes. In contrast, we propose to Regress all meshes in a One-stage fashion for Multiple 3D People (termed ROMP). The approach is conceptually simple, bounding box-free, and able to learn a per-pixel representation in an end-to-end manner. Our method simultaneously predicts a Body Center heatmap and a Mesh Parameter map, which can jointly describe the 3D body mesh on the pixel level. 
Through a body-center-guided sampling process, the body mesh parameters of all people in the image are easily extracted from the Mesh Parameter map. Equipped with such a fine-grained representation, our one-stage framework is free of the complex multi-stage process and more robust to occlusion. Compared with state-of-the-art methods, ROMP achieves superior performance on the challenging multi-person benchmarks, including 3DPW and CMU Panoptic. Experiments on crowded/occluded datasets demonstrate the robustness under various types of occlusion. The released code[^1] is the first real-time implementation of monocular multi-person 3D mesh regression.'\nauthor:\n- |\n Yu Sun$^1$[^2]Qian Bao$^2$ Wu Liu$^{2}$[^3] Yili Fu$^{1\\dagger}$Michael J. Black$^3$" +"---\nabstract: 'Newton-step approximations to pseudo maximum likelihood estimates of spatial autoregressive models with a large number of parameters are examined, in the sense that the parameter space grows slowly as a function of sample size. These have the same asymptotic efficiency properties as maximum likelihood under Gaussianity but are of closed form. Hence they are computationally simple and free from compactness assumptions, thereby avoiding two notorious pitfalls of implicitly defined estimates of large spatial autoregressions. When commencing from an initial least squares estimate, the Newton step can also lead to weaker regularity conditions for a central limit theorem than some extant in the literature. A simulation study demonstrates excellent finite sample gains from Newton iterations, especially in large multiparameter models for which grid search is costly. A small empirical illustration shows improvements in estimation precision with real data.'\nauthor:\n- 'Abhimanyu Gupta[^1] [^2]'\nbibliography:\n- 'thesisb.bib'\ntitle: 'Efficient closed-form estimation of large spatial autoregressions[^3]'\n---\n\n**Keywords:** Spatial autoregression, efficiency, many parameters, networks\n\n**JEL Classification:** C21, C31, C33, C36\n\nIntroduction\n============\n\nSpatial autoregressive (SAR) models, introduced by [@cliff1973spatial], are popular tools for modelling cross-sectionally dependent economic data. The pre-eminent feature of such models is the presence of one or more" +"---\nbibliography:\n- 'references.bib'\n---\n\n\u00a0\\\n[ 2-Group Global Symmetries and Anomalies\\\nin Six-Dimensional Quantum Field Theories]{}\n\n\u00a0\\\nClay C\u00f3rdova,$^1$ Thomas T.\u00a0Dumitrescu,$^2$ and Kenneth Intriligator$^3$\n\n\u00a0\\\n$^1$[*Kadanoff Center for Theoretical Physics & Enrico Fermi Institute, University of Chicago*]{}\\\n$^2$[*Mani L.Bhaumik Institute for Theoretical Physics, Department of Physics and Astronomy,*]{}\\\n[*University of California, Los Angeles, CA 90095, USA*]{}\\\n$^3$[*Department of Physics, University of California, San Diego*]{}\n\n\u00a0\\\n\nWe examine six-dimensional quantum field theories through the lens of higher-form global symmetries. Every Yang-Mills gauge theory in six dimensions, with field strength $f^{(2)}$, naturally gives rise to a continuous 1-form global symmetry associated with the 2-form instanton current $J^{(2)} \\sim * \\operatorname{\\mathrm{Tr}}\\left( f^{(2)} \\wedge f^{(2)}\\right)$. We show that suitable mixed anomalies involving the gauge field $f^{(2)}$ and ordinary 0-form global symmetries, such as flavor or Poincar\u00e9 symmetries, lead to continuous 2-group global symmetries, which allow two flavor currents or two stress tensors to fuse into the 2-form current $J^{(2)}$. 
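The body-center-guided sampling in the ROMP abstract above can be mimicked in a few lines. In this hypothetical sketch, local maxima of a toy Body Center heatmap above a threshold are treated as detected people, and the parameter vector stored at the same pixel of the Mesh Parameter map is read out; the 145-channel size is an assumption for illustration, not the paper's exact layout.

    import numpy as np

    def sample_meshes(center_heatmap, param_map, thresh=0.7):
        # Treat local maxima of the center heatmap above `thresh` as
        # detected people and read the parameter vector at that pixel.
        H, W = center_heatmap.shape
        people = []
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                patch = center_heatmap[i - 1:i + 2, j - 1:j + 2]
                if center_heatmap[i, j] >= thresh and center_heatmap[i, j] == patch.max():
                    people.append(param_map[:, i, j])
        return people

    # Toy maps: a 64x64 heatmap with two "people" and a 145-dim
    # parameter vector per pixel (assumed camera + pose + shape channels).
    heat = np.zeros((64, 64)); heat[20, 30] = heat[40, 10] = 1.0
    params = np.random.rand(145, 64, 64)
    print(len(sample_meshes(heat, params)))  # -> 2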
We discuss several features of 2-group symmetry in six dimensions, many of which parallel the four-dimensional case. The majority of six-dimensional supersymmetric conformal field theories (SCFTs) and little string theories have infrared phases with non-abelian gauge fields. We show that the mixed anomalies" +"---\nabstract: 'We report the magnetic, electronic and transport properties of the quasi-skutterudite compound Pr$_3$Ir$_4$Ge$_{13}$ by means of magnetic susceptibility $\chi(T)$, electrical resistivity $\rho(T)$, specific heat $C_p(T)$, thermal conductivity $\kappa(T)$, thermoelectric power $S(T)$ and Hall coefficient $R_\mathrm{H}(T)$ measurements. Pr$_3$Ir$_4$Ge$_{13}$ does not show any phase transition down to 1.9 K. Magnetic and specific heat measurements show that the system possesses a crystal electric field singlet ground state that is separated from the first excited state by about 37 K. $\rho(T)$ shows a negative temperature coefficient of resistance for the whole temperature range studied, which can be explained in terms of Mott\u2019s impurity band conduction mechanism. $R_\mathrm{H}(T)$ measurements show that Pr$_3$Ir$_4$Ge$_{13}$ is a low-carrier density semimetal and its transport properties indicate a metallic to non-metallic crossover behaviour. Large Seebeck values were observed for the entire temperature range of investigation, and the analysis of the temperature variation of $S$ and $S/T$ showed no sign of strong correlation between the Pr 4$f^2$ and conduction electron states near the Fermi level. A large Sommerfeld coefficient, $\gamma = 150$\u00a0mJ/(mol K$^2$), indicates the formation of a moderate heavy-fermion state emerging from the dynamical crystal field fluctuations.'\naddress:\n- 'Beijing National Laboratory for Condensed Matter Physics, Institute of Physics," +"---\nabstract: 'In this paper, we propose a non-orthogonal multiple access (NOMA)-based communication framework that allows machine type devices (MTDs) to access the network while avoiding congestion. The proposed technique is a 2-step mechanism that first employs fast uplink grant to schedule the devices without sending a request to the base station (BS). Secondly, NOMA pairing is employed in a distributed manner to reduce signaling overhead. Due to the limited capability of information gathering at the BS in massive scenarios, learning techniques are the best fit for such problems. Therefore, multi-armed bandit learning is adopted to schedule the fast grant MTDs. Then, constrained random NOMA pairing is proposed that assists in decoupling the two main challenges of fast uplink grant schemes, namely active set prediction and optimal scheduling. Using NOMA, we were able to significantly reduce the resource wastage due to prediction errors. Additionally, the results show that the proposed scheme can easily attain the impractical optimal OMA performance, in terms of the achievable rewards, at an affordable complexity.'\nauthor:\n- \ntitle: 'Fast Grant Learning-Based Approach for Machine Type Communications with NOMA'\n---\n\nIoT, MTC, congestion control, NOMA.\n\nIntroduction\n============\n\nMassive machine type communications (mMTC) and ultra-reliable low-latency communications (URLLC) are" +"---\nabstract: 'We propose a novel optimization framework to predict clinical severity from resting state fMRI (rs-fMRI) data. Our model consists of two coupled terms. The first term decomposes the correlation matrices into a sparse set of representative subnetworks that define a network manifold. 
These subnetworks are modeled as rank-one outer-products which correspond to the elemental patterns of co-activation across the brain; the subnetworks are combined via patient-specific non-negative coefficients. The second term is a linear regression model that uses the patient-specific coefficients to predict a measure of clinical severity. We validate our framework on two separate datasets in a ten-fold cross-validation setting. The first is a cohort of fifty-eight patients diagnosed with Autism Spectrum Disorder (ASD). The second dataset consists of sixty-three patients from a publicly available ASD database. Our method outperforms standard semi-supervised frameworks, which employ conventional graph theoretic and statistical representation learning techniques to relate the rs-fMRI correlations to behavior. In contrast, our joint network optimization framework exploits the structure of the rs-fMRI correlation matrices to simultaneously capture group level effects and patient heterogeneity. Finally, we demonstrate that our proposed framework robustly identifies clinically relevant networks characteristic of ASD.'\naddress:\n- 'Department of Electrical" +"---\nauthor:\n- 'Sven Gedicke, Annika Bonerath, Benjamin Niedermann, and Jan-Henrik Haunert'\nbibliography:\n- 'strings.bib'\n- 'references.bib'\ntitle: 'Zoomless Maps: External Labeling Methods for the Interactive Exploration of Dense Point Sets at a Fixed Map Scale'\n---\n\nIn recent years, devices such as smartphones and smartwatches have conquered our daily lives and have made digital maps available at any time. However, due to the small screen sizes, the presentation of spatial information on such devices demands the development of new and innovative visualization techniques. As an example, take a digital map that shows the results of a query for restaurants in the near surroundings of the user; see\u00a0. Desktop systems and tablets typically offer enough space to place labels for an appropriately large selection of restaurants while still preserving the legibility of the background map. In contrast, for small-screen devices\u2014especially for smartwatches\u2014this is hardly possible as the screen may take only a few labels without covering the map too much. The challenges posed by limited space for label placement are often softened by interactive map operations such as panning and zooming. They provide the user with the possibility of exploring the map by digging into its details. Hence, when there" +"---\nabstract: 'In this paper, we propose an automatic approach for localizing the inner eye canthus in thermal face images. We first coarsely detect 5 facial keypoints corresponding to the center of the eyes, the nose tip and the ears. Then we compute a sparse 2D-3D point correspondence using a 3D Morphable Face Model (3DMM). This correspondence is used to project the entire 3D face onto the image, and subsequently locate the inner eye canthus. Detecting this location allows us to obtain the most precise body temperature measurement for a person using a thermal camera. We evaluated the approach on a thermal face dataset provided with manually annotated landmarks. However, such manual annotations are normally conceived to identify facial parts such as eyes, nose and mouth, and are not specifically tailored for localizing the eye canthus region. As an additional contribution, we enrich the original dataset by using the annotated landmarks to deform and project the 3DMM onto the images. 
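To make the rank-one subnetwork decomposition above concrete, here is a small synthetic sketch (assumed names and sizes, not the authors' solver): a correlation matrix built as a non-negative combination of outer products b_k b_k^T is vectorized, and the patient-specific coefficients are recovered with non-negative least squares. A linear model on those coefficients would then predict severity.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    R, K = 30, 4                      # brain regions, subnetworks

    B = rng.standard_normal((R, K))   # subnetwork basis vectors b_k
    C_true = rng.random(K)            # patient-specific non-negative loadings
    corr = sum(c * np.outer(B[:, k], B[:, k]) for k, c in enumerate(C_true))

    # Vectorize the rank-one outer products b_k b_k^T and solve a
    # non-negative least-squares problem for one patient's loadings.
    A = np.stack([np.outer(B[:, k], B[:, k]).ravel() for k in range(K)], axis=1)
    coef, resid = nnls(A, corr.ravel())
    print(np.allclose(coef, C_true, atol=1e-6))  # -> True on noiseless data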
Then, by manually selecting a small region corresponding to the eye canthus, we enrich the dataset with additional annotations. By using the manual landmarks, we ensure the correctness of the 3DMM projection, which can be used as ground-truth for future evaluations. Moreover, we" +"---\nabstract: 'The Aubry transition between sliding and pinned phases, driven by the competition between two incommensurate length scales, represents a paradigm that is applicable to a large variety of microscopically distinct systems. Despite previous theoretical studies, it remains an open question to what extent quantum effects modify the transition, or are experimentally observable. An experimental platform that can potentially reach the quantum regime has recently become available in the form of trapped laser-cooled ions subject to a periodic optical potential\u00a0[@Bylinskii2016]. Using Path-Integral Monte Carlo (PIMC) simulation methods, we analyze the impact of quantum tunneling on the sliding-to-pinned transition in this system, and determine the phase diagram in terms of incommensuration and potential strength. We propose new signatures of the quantum Aubry transition that are robust against thermal and finite-size effects, and that can be observed in future experiments.'\nauthor:\n- Pietro Maria Bonetti\n- Andrea Rucci\n- Maria Luisa Chiofalo\n- Vladan Vuleti\u0107\ntitle: Quantum Effects in the Aubry Transition\n---\n\nIntroduction\n============\n\nAn intriguing feature of quantum many-particle systems is that they may undergo phase transitions even at zero temperature, driven by the competition between kinetic and interaction energies across a critical point\u00a0[@Sachdev]. While the transition" +"---\nabstract: 'The broad adoption of Electronic Health Records (EHR) has led to vast amounts of data being accumulated on a patient\u2019s history, diagnosis, prescriptions, and lab tests. Advances in recommender technologies have the potential to utilize this information to help doctors personalize the prescribed medications. In this work, we design a two-stage attention-based personalized medication recommender system called PREMIER which incorporates information from the EHR to suggest a set of medications. Our system takes into account the interactions among drugs in order to minimize the adverse effects for the patient. We utilize the various attention weights in the system to compute the contributions from the information sources for the recommended medications. Experiment results on MIMIC-III and a proprietary outpatient dataset show that PREMIER outperforms state-of-the-art medication recommendation systems while achieving the best tradeoff between accuracy and drug-drug interaction. 
Two case studies are also presented demonstrating that the justifications provided by PREMIER are appropriate and aligned with clinical practices.'\nauthor:\n- Suman Bhoi\n- Lee Mong Li\n- Wynne Hsu\nbibliography:\n- 'bibliography.bib'\ntitle: 'PREMIER: Personalized REcommendation for Medical prescrIptions from Electronic Records'\n---\n\n" +"---\nabstract: 'We present a large-scale survey of the central molecular zone (CMZ) of our Galaxy, as well as a monitoring program of Sgr A\*, with the AzTEC/Large Millimeter Telescope (LMT) in the 1.1 mm continuum. Our 1.1 mm map covers the main body of the CMZ over a field of $1.6 \times 1.1$ deg$^2$ with an angular resolution of $10.5''''$ and a depth of 15 mJy/beam. To account for the intensity loss due to the background removal process, we combine this map with lower resolution CSO/Bolocam and *Planck*/HFI data to produce an effective full intensity 1.1 mm continuum map. With this map and existing *Herschel* surveys, we have carried out a comprehensive analysis of the spectral energy distribution (SED) of dust in the CMZ. A key component of this analysis is the implementation of a model-based deconvolution approach, incorporating the Point Spread Functions (PSFs) of the different instruments, and hence recovering a significant amount of spatial information on angular scales larger than $10.5''''$. The monitoring of Sgr A\* was carried out as part of a worldwide, multi-wavelength campaign when the so-called G2 object was undergoing the pericenter passage around the massive black hole (MBH). Our preliminary results include 1)" +"---\nabstract: 'This paper proposes a centralized multi-vehicle coordination scheme serving unsignalized intersections. The whole process consists of three stages: a) target velocity optimization: formulate the collision-free vehicle coordination as a Mixed Integer Linear Programming (MILP) problem, with each incoming lane representing an independent variable; b) dynamic vehicle selection: build a directed graph with the result of the optimization, and reserve only some of the vehicle nodes to coordinate by applying a subset extraction algorithm; c) synchronous velocity profile planning: bridge the gap between current speed and optimal velocity in a synchronous manner. The problem size is essentially bounded by the number of lanes instead of vehicles. Thus the optimization process is real-time with guaranteed solution quality. Simulation has verified the efficiency and real-time performance of the scheme.'\nauthor:\n- 'Qiang Ge, Qi Sun, Zhen Wang, Shengbo Eben Li, Ziqing Gu and Sifa Zheng[^1][^2]'\ntitle: |\n **Centralized Coordination of Connected Vehicles at Intersections\\\n using Graphical Mixed Integer Optimization\\*** \n---\n\nIntroduction\n============\n\nIntersection capacity is a limit to transportation efficiency. A jammed crossing will deteriorate safety, efficiency, gas emissions, as well as passengers\u2019 experience (because of frequently stop-and-go operations). 
Much effort has been devoted to this field for the aforementioned reasons.\n\nResearch on vehicles" +"---\nabstract: |\n Intelligent assistants that follow commands or answer simple questions, such as Siri and Google search, are among the most economically important applications of AI. Future conversational AI assistants promise even greater capabilities and a better user experience through a deeper understanding of the domain, the user, or the user\u2019s purposes. But what domain and what methods are best suited to researching and realizing this promise? In this article we argue for the domain of\u00a0*voice document editing* and for the methods of\u00a0*model-based reinforcement learning*. The primary advantages of voice document editing are that the domain is tightly scoped and that it provides something for the conversation to be about (the document) that is delimited and fully accessible to the intelligent assistant. The advantages of reinforcement learning in general are that its methods are designed to learn from interaction without explicit instruction and that it formalizes the purposes of the assistant. Model-based reinforcement learning is needed in order to genuinely understand the domain of discourse and thereby work efficiently with the user to achieve their goals. Together, voice document editing and model-based reinforcement learning comprise a promising research direction for achieving conversational AI.\n\n **Keywords**: conversational AI, intelligent" +"---\nabstract: 'We initiate a systematic implementation of the spectral domain decomposition technique with the Galerkin-Collocation (GC) method in situations of interest such as the spherical collapse of a scalar field in the characteristic formulation. We discuss the transmission conditions at the interface of contiguous subdomains that are crucial for the domain decomposition technique for hyperbolic problems. We implemented codes with an arbitrary number of subdomains, and after validating them, we applied them to the problem of critical collapse. With a modest resolution, we obtain Choptuik\u2019s scaling law and its oscillatory component due to the discrete self-similarity of the critical solution.'\nauthor:\n- 'M. A. Alcoforado$^{1}$'\n- 'W. O. Barreto$^{1,\,2}$'\n- 'H. P. de Oliveira$^{1}$'\nsubtitle: characteristic spherical collapse of scalar fields\ntitle: 'Multidomain Galerkin-Collocation method:'\n---\n\nIntroduction\n============\n\nSpectral methods have become popular in numerical relativity with applications in a large variety of problems [@grand_novak]. It is well known that spectral methods are global methods characterized by expansions of high-order polynomial approximations that provide highly accurate solutions exhibiting exponential convergence for smooth functions with moderate computational resources. However, in general, the accuracy is spoiled in the case of complex geometries, or for the situation in which the solutions" +"---\nabstract: 'We present an automatic piano transcription system that converts polyphonic audio recordings into musical scores. This has been a long-standing problem of music information processing, and recent studies have made remarkable progress in the two main component techniques: multipitch detection and rhythm quantization. Given this situation, we study a method integrating deep-neural-network-based multipitch detection and statistical-model-based rhythm quantization. 
In the first part, we conducted systematic evaluations and found that while the present method achieved high transcription accuracies at the note level, some global characteristics of music, such as tempo scale, metre (time signature), and bar line positions, were often incorrectly estimated. In the second part, we formulated non-local statistics of pitch and rhythmic contents that are derived from musical knowledge and studied their effects in inferring those global characteristics. We found that these statistics are markedly effective for improving the transcription results and that their optimal combination includes statistics obtained from separated hand parts. The integrated method had an overall transcription error rate of $7.1\\%$ and a downbeat F-measure of $85.6\\%$ on a dataset of popular piano music, and the generated transcriptions can be partially used for music performance and assisting human transcribers, thus demonstrating the potential for" +"---\nabstract: 'We study patterns observed right after the loss of stability of mixing in the Kuramoto model of coupled phase oscillators with random intrinsic frequencies on large graphs, which can also be random. We show that the emergent patterns are formed via two independent mechanisms determined by the shape of the frequency distribution and the limiting structure of the underlying graph sequence. Specifically, we identify two nested eigenvalue problems whose eigenvectors (unstable modes) determine the structure of the nascent patterns. The analysis is illustrated with the results of the numerical experiments with the Kuramoto model with unimodal and bimodal frequency distributions on certain graphs.'\nauthor:\n- 'Hayato Chiba,[^1] Georgi S. Medvedev,[^2] and Matthew S. Mizuhara[^3]'\ntitle: ' Instability of mixing in the Kuramoto model: From bifurcations to patterns'\n---\n\nIntroduction\n============\n\nModels of interacting dynamical systems come up in different areas of science and technology. Modern applications ranging from neuroscience to power grids emphasize models with spatially structured interactions defined by graphs. Identifying dynamical mechanisms underlying pattern formation in such networks is an interesting problem with many important applications. In this paper, we study patterns emerging near the loss of stability of mixing in the Kuramoto model (KM) with" +"---\nabstract: 'The objective of this study is to understand the dynamics of freely evolving particle suspensions over a wide range of particle-to-fluid density ratios. The dynamics of particle suspensions are characterized by the average momentum equation, where the dominant contribution to the average momentum transfer between particles and fluid is the average drag force. In this study, the average drag force is quantified using particle-resolved direct numerical simulation in a canonical problem: a statistically homogeneous suspension where an imposed mean pressure gradient establishes a steady mean slip velocity between the phases. The effects of particle velocity fluctuations, particle clustering, and mobility of particles are studied separately. It is shown that the competing effects of these factors could decrease, increase, or keep constant the drag of freely evolving suspensions in comparison to fixed beds at different flow conditions. It is also shown that the effects of particle clustering and particle velocity fluctuations are not independent. 
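For readers who want to reproduce the qualitative setup of the Kuramoto study above, a minimal simulation on a random graph looks as follows; the graph, the coupling K, and the step size are arbitrary toy choices, not the paper's parameters.

    import numpy as np
    import networkx as nx

    # Kuramoto model on a graph:
    # dtheta_i/dt = omega_i + (K/N) * sum_j A_ij * sin(theta_j - theta_i)
    rng = np.random.default_rng(0)
    G = nx.erdos_renyi_graph(200, 0.1, seed=0)
    A = nx.to_numpy_array(G)
    N, K, dt = len(A), 2.0, 0.01
    omega = rng.standard_normal(N)          # unimodal (Gaussian) frequencies
    theta = rng.uniform(0, 2 * np.pi, N)    # start near the mixing state

    for _ in range(5000):
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + (K / N) * coupling)

    r = abs(np.exp(1j * theta).mean())      # order parameter: ~0 mixing, ~1 coherent
    print(f"r = {r:.2f}")

Sweeping K (or the frequency distribution) and watching where r departs from zero locates the loss of stability of mixing that the paper analyzes.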
Finally, a correlation for interphase drag force in terms of volume fraction, Reynolds number, and density ratio is proposed. Two different approaches (symbolic regression and predefined functional forms) are used to develop the drag correlation. Since this drag correlation has been inferred from simulations of" +"---\nauthor:\n- |\n Po-Ming Law [^1]\\\n Georgia Institute of Technology\n- |\n Alex Endert [^2]\\\n Georgia Institute of Technology\n- |\n John Stasko [^3]\\\n Georgia Institute of Technology\nbibliography:\n- 'template.bib'\nnocite: '[@dive]'\ntitle: Characterizing Automated Data Insights\n---\n\nProviding insight has been recognized as a main goal of visualization\u00a0[@quote]. However, gleaning data insights from visualization is a non-trivial task that requires domain knowledge, analysis expertise, and visualization literacy. To facilitate insight generation, some researchers have created systems that automatically communicate data insights to users\u00a0[@seedb; @quickInsights; @datashot]. For instance, Quick Insights in Power BI\u00a0[@quickInsights] suggests prominent trends and patterns within a data set that are presented as charts along with textual descriptions (Fig.\u00a0\[powerBI\]).\n\nDevelopers of these systems often use the term \u201cinsight\u201d to refer to the automatically-extracted information (e.g., Quick Insights\u00a0[@quickInsights] and Automated Insights\u00a0[@autoInsight]). However, \u201cinsight\u201d is an overloaded term that has been applied from multiple perspectives in the visualization community\u00a0[@newPaper]. In the seminal work about insight-based evaluation, Saraiya et al.\u00a0[@insightEval] regard insights as data findings. On the other hand, Sacha et al.\u00a0[@knowledge] consider insights a product resulting from evaluating data findings with domain knowledge. Lacking a clarification of what" +"---\nabstract: 'Even though the Standard Model (SM) has achieved great success, its application to the field of low energies still lacks a solid foundation due to our limited knowledge of non-perturbative QCD. Practically all theoretical calculations of the hadronic transition matrix elements are based on various phenomenological models. There indeed exist some anomalies in the field which are waiting for interpretations. The goal of this work is to solve one of the anomalies: the discrepancy between the theoretical prediction on the sign of the up-down asymmetry parameter of $\Lambda_c\to\Sigma\pi$ and the experimental measurement. In the literature, several authors calculated the rate and determined the asymmetry parameter within various schemes, but there exist obvious loopholes in those adopted scenarios. To solve the discrepancy between theory and data, we suggest that not only the direct transition process contributes to the observed $\Lambda_c\to\Sigma\pi$, but also other portals such as $\Lambda_c\to \Lambda\rho$ also play a substantial role via an isospin-conserving re-scattering $\Lambda\rho\to\Sigma\pi$. Taking into account the effects induced by the final state interaction, we re-evaluate the relevant quantities. 
Our numerical results indicate that the new theoretical prediction based on this scenario involving an interference between the direct transition of $\Lambda_c\to\Sigma\pi$ and the portal" +"---\nabstract: 'Orthogonal Frequency Division Multiplexing (OFDM) is one of the most widely adopted schemes in wireless technologies such as Wi-Fi and LTE due to its high transmission rates and robustness against Intersymbol Interference (ISI). However, OFDM is highly sensitive to synchronism errors, which affect the orthogonality of the carriers. We analyzed several synchronization algorithms based on the correlation of the preamble symbols through the implementation in Software-Defined Radio (SDR) using the Universal Software Radio Peripheral (USRP). Such an implementation was performed in three stages: frame detection, comparing the autocorrelation output and the average power of the received signal; time synchronism, where the cross-correlation based on the short and long preamble symbols was implemented; and the frequency synchronism, where the Carrier Frequency Offset (CFO) added by the channel was detected and corrected. The synchronizer performance was verified through the USRP implementation. The results serve as a practical guide to selecting the optimal synchronism scheme and show the versatility of the USRP to implement digital communication systems efficiently.'\nauthor:\n- \nbibliography:\n- 'IEEEabrv.bib'\n- 'ConfAbrv.bib'\n- 'GP4.bib'\ntitle: 'Design and implementation in USRP of a preamble-based synchronizer for OFDM systems'\n---\n\nOFDM, synchronization, USRP, SDR, IEEE 802.11.\n\nIntroduction\n============" +"---\nabstract: '\[sec:1\] Apache introduced YARN as the next generation of the Hadoop framework, providing resource management and a central platform to deliver consistent data governance tools across Hadoop clusters. Hadoop YARN supports multiple frameworks like MapReduce to process different types of data and works with different scheduling policies such as FIFO, Capacity, and Fair schedulers. DRF is the best option that uses short-term convergence to fairness (without considering history information) for multi-type resource allocation. However, DRF performance is still not satisfactory due to trade-offs between fairness and performance regarding resource utilization. To address this problem, we propose Simulated Annealing Fair scheduling, SAF, a long-term fair scheme in resource allocation to achieve fairness and excellent performance in terms of resource utilization and MakeSpan. We introduce a new parameter, entropy, which indicates the disorder in the fairness of allocated resources of the whole cluster. We implemented SAF as a pluggable scheduler in a Hadoop YARN cluster and evaluated it with standard MapReduce benchmarks in the YARN Scheduler Load Simulator (SLS) and the CloudSim Plus simulation framework. Finally, the results of both simulation tools are evidence to prove our claim. Compared to DRF, SAF increases resource utilization of YARN clusters" +"---\nabstract: 'Apart from discriminative models for classification and object detection tasks, the application of deep convolutional neural networks to basic research utilizing natural imaging data has been somewhat limited; particularly in cases where a set of interpretable features for downstream analysis is needed, a key requirement for many scientific investigations. 
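The three synchronization stages described in the OFDM abstract above can be prototyped offline before moving to the USRP. The sketch below is schematic, not the paper's code: a preamble with two identical halves is detected by comparing the halves' autocorrelation with the received power (a Schmidl-Cox-style metric), and the CFO is estimated from the phase of that correlation; the lengths, offset, and noise level are toy values.

    import numpy as np

    rng = np.random.default_rng(0)
    L = 64                                   # half-length of the repeated preamble
    half = np.exp(2j * np.pi * rng.random(L))
    tx = np.concatenate([np.zeros(100), np.tile(half, 2), np.zeros(100)])

    cfo = 0.002                              # normalized carrier frequency offset
    rx = tx * np.exp(2j * np.pi * cfo * np.arange(len(tx)))
    rx += 0.05 * (rng.standard_normal(len(rx)) + 1j * rng.standard_normal(len(rx)))

    # Frame detection: autocorrelation of the two identical halves vs power.
    d = np.arange(len(rx) - 2 * L)
    P = np.array([np.vdot(rx[k:k + L], rx[k + L:k + 2 * L]) for k in d])
    R = np.array([np.sum(np.abs(rx[k + L:k + 2 * L]) ** 2) for k in d])
    M = np.abs(P) ** 2 / np.maximum(R, 1e-12) ** 2

    start = int(np.argmax(M))                # coarse timing estimate
    cfo_hat = np.angle(P[start]) / (2 * np.pi * L)
    print(start, round(cfo_hat, 4))          # close to 100 and 0.002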
We present an algorithm and training paradigm designed specifically to address this: decontextualized hierarchical representation learning (DHRL). By combining a generative model chaining procedure with a ladder network architecture and latent space regularization for inference, DHRL addresses the limitations of small datasets and encourages a disentangled set of hierarchically organized features. In addition to providing a tractable path for analyzing complex hierarchical patterns using variational inference, this approach is generative and can be directly combined with empirical and theoretical approaches. To highlight the extensibility and usefulness of DHRL, we demonstrate this method in application to a question from evolutionary biology.'\nauthor:\n- 'R. Ian Etheredge'\n- Manfred Schartl\n- Alex Jordan\nbibliography:\n- 'references.bib'\ntitle: Decontextualized learning for interpretable hierarchical representations of visual patterns\n---\n\nIntroduction\n============\n\nThe application of deep convolutional neural networks (CNNs [@LeCun2010]) to supervised tasks is quickly becoming ubiquitous, even outside of standardized visual classification" +"---\nabstract: 'Image quality of PET reconstructions is degraded by subject motion occurring during the acquisition. MR-based motion correction approaches have been studied for PET/MR scanners and have been successful at capturing regular motion patterns, when used in conjunction with surrogate signals (e.g. navigators) to detect motion. However, handling irregular respiratory motion and bulk motion remains challenging. In this work, we propose an MR-based motion correction method relying on subspace-based real-time MR imaging to estimate motion fields used to correct PET reconstructions. We take advantage of the low-rank characteristics of dynamic MR images to reconstruct high-resolution MR images at high frame rates from highly undersampled k-space data. Reconstructed dynamic MR images are used to determine motion phases for PET reconstruction and estimate phase-to-phase nonrigid motion fields able to capture complex motion patterns such as irregular respiratory and bulk motion. MR-derived binning and motion fields are used for PET reconstruction to generate motion-corrected PET images. The proposed method was evaluated on in vivo data with irregular motion patterns. MR reconstructions accurately captured motion, outperforming state-of-the-art dynamic MR reconstruction techniques. Evaluation of PET reconstructions demonstrated the benefits of the proposed method over standard methods in terms of motion artifact reduction. The proposed" +"---\nabstract: 'The number density and correlation function of galaxies are two key quantities to characterize the distribution of the observed galaxy population. High-$z$ spectroscopic surveys, which usually involve complex target selection and are incomplete in redshift sampling, present both opportunities and challenges to measure these quantities reliably in the high-$z$ Universe. Using realistic mock catalogs we show that target selection and redshift incompleteness can lead to significantly biased results, especially due to the flux limit selection criteria. We develop a new method to correct the flux limit effect, using information provided by the parent photometric data from which the spectroscopic sample is constructed. Our tests using realistic mock samples show that the method is able to reproduce the true stellar mass function and correlation function reliably. 
Mock catalogs are constructed for the existing zCOSMOS and VIPERS surveys, as well as for the forthcoming PFS galaxy evolution survey. The same set of mock samples are used to quantify the total variance expected for different sample sizes. We find that the total variance decreases very slowly when the survey area reaches about 4 deg$^2$ for the abundance and about 8 deg$^2$ for the clustering, indicating that the cosmic variance is no" +"---\nabstract: 'We derive a new explicit formula in terms of sums over graphs for the $n$-point correlation functions of general formal weighted double Hurwitz numbers coming from the Kadomtsev\u2013Petviashvili tau functions of hypergeometric type (also known as Orlov\u2013Scherbin partition functions). Notably, we use the change of variables suggested by the associated spectral curve, and our formula turns out to be a polynomial expression in a certain small set of formal functions defined on the spectral curve.'\naddress:\n- 'B.\u00a0B.: Faculty of Mathematics, National Research University Higher School of Economics, Usacheva 6, 119048 Moscow, Russia; and Center of Integrable Systems, P.G. Demidov Yaroslavl State University, Sovetskaya 14, 150003,Yaroslavl, Russia'\n- 'P.\u00a0D.-B.: Faculty of Mathematics, National Research University Higher School of Economics, Usacheva 6, 119048 Moscow, Russia; HSE\u2013Skoltech International Laboratory of Representation Theory and Mathematical Physics, Skoltech, Nobelya 1, 143026, Moscow, Russia; and ITEP, 117218 Moscow, Russia'\n- 'M.\u00a0K.: Faculty of Mathematics, National Research University Higher School of Economics, Usacheva 6, 119048 Moscow, Russia; and Center for Advanced Studies, Skoltech, Nobelya 1, 143026, Moscow, Russia'\n- 'S.\u00a0S.: Korteweg-de Vries Institute for Mathematics, University of Amsterdam, Postbus 94248, 1090 GE Amsterdam, The Netherlands'\nauthor:\n- Boris\u00a0Bychkov\n-" +"---\nabstract: 'Recently, many zero-shot learning (ZSL) methods focused on learning discriminative object features in an embedding feature space, however, the distributions of the unseen-class features learned by these methods are prone to be partly overlapped, resulting in inaccurate object recognition. Addressing this problem, we propose a novel adversarial network to synthesize compact semantic visual features for ZSL, consisting of a residual generator, a prototype predictor, and a discriminator. The residual generator is to generate the visual feature residual, which is integrated with a visual prototype predicted via the prototype predictor for synthesizing the visual feature. The discriminator is to distinguish the synthetic visual features from the real ones extracted from an existing categorization CNN. Since the generated residuals are generally numerically much smaller than the distances among all the prototypes, the distributions of the unseen-class features synthesized by the proposed network are less overlapped. In addition, considering that the visual features from categorization CNNs are generally inconsistent with their semantic features, a simple feature selection strategy is introduced for extracting more compact semantic visual features. 
Extensive experimental results on six benchmark datasets demonstrate that our method could achieve a significantly better performance than existing state-of-the-art methods by $\\sim$$1.2$-$13.2\\%$ in" +"---\nabstract: 'Dynamic Translation (DT) is a sophisticated technique that allows the implementation of high-performance emulators and high-level-language virtual machines. In this technique, the guest code is compiled dynamically at runtime. Consequently, achieving good performance depends on several design decisions, including the shape of the regions of code being translated. Researchers and engineers explore these decisions to bring the best performance possible. However, a real DT engine is a very sophisticated piece of software, and modifying one is a hard and demanding task. Hence, we propose using simulation to evaluate the impact of design decisions on dynamic translators and present RAIn, an open-source DT simulator that facilitates the test of DT\u2019s design decisions, such as Region Formation Techniques (RFTs). RAIn outputs several statistics that support the analysis of how design decisions may affect the behavior and the performance of a real DT. We validated RAIn running a set of experiments with six well known RFTs (NET, MRET2, LEI, NETPlus, NET-R, and NETPlus-e-r) and showed that it can reproduce well-known results from the literature without the effort of implementing them on a real and complex dynamic translator engine.'\nauthor:\n- 'Vanderson M. do Rosario$^1$'\n- 'Raphael Zinsly$^{1,2}$'\n- Sandro Rigo$^1$\n-" +"---\nabstract: 'Convergence detection of iterative stochastic optimization methods is of great practical interest. This paper considers stochastic gradient descent (SGD) with a constant learning rate and momentum. We show that there exists a transient phase in which iterates move towards a region of interest, and a stationary phase in which iterates remain bounded in that region around a minimum point. We construct a statistical diagnostic test for convergence to the stationary phase using the inner product between successive gradients and demonstrate that the proposed diagnostic works well. We theoretically and empirically characterize how momentum can affect the test statistic of the diagnostic, and how the test statistic captures a relatively sparse signal within the gradients in convergence. Finally, we demonstrate an application to automatically tune the learning rate by reducing it each time stationarity is detected, and show the procedure is robust to mis-specified initial rates.'\nauthor:\n- \nbibliography:\n- 'standard.bib'\ntitle: Understanding and Detecting Convergence for Stochastic Gradient Descent with Momentum\n---\n\nIntroduction\n============\n\nConsider the problem in stochastic optimization $$\\begin{aligned}\n\\label{eq:stoch_opt}\n\\theta_\\star &= \\arg \\min_{\\theta \\in \\Theta} {\\mathbb{E}[ \\ell ( \\theta, \\xi ) ]}.\\end{aligned}$$ The loss $\\ell$ is parameterized by $\\Theta \\subseteq \\mathbb{R}^p$, and $\\xi$ is a" +"---\nabstract: 'We present a measurement of the rate of correlated neutron captures in the WATCHBOY detector, deployed at a depth of approximately 390 meters water equivalent (m.w.e.) in the Kimballton Underground Research Facility (KURF). WATCHBOY consists of a cylindrical 2 ton water target doped with 0.1% gadolinium, surrounded by a 40 ton undoped water hermetic shield. 
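A minimal version of the inner-product convergence diagnostic for SGD with momentum, sketched here under assumed toy settings (a quadratic loss, a heavy-ball update, and halving the learning rate whenever the accumulated statistic turns negative, as in the tuning application the abstract describes):

    import numpy as np

    rng = np.random.default_rng(0)
    theta, v = np.array([5.0, -3.0]), np.zeros(2)
    lr, beta = 0.1, 0.9
    prev_grad, running, window = None, 0.0, 50

    for t in range(1, 2001):
        grad = theta + rng.standard_normal(2)   # noisy gradient of 0.5*||theta||^2
        v = beta * v + grad                     # heavy-ball momentum
        theta -= lr * v
        # Diagnostic: accumulate inner products of successive stochastic
        # gradients; persistent negativity signals the stationary phase.
        if prev_grad is not None:
            running += float(prev_grad @ grad)
        prev_grad = grad
        if t % window == 0:
            if running < 0:                     # stationarity detected
                lr *= 0.5                       # reduce the rate and continue
            running = 0.0

    print(lr, np.round(theta, 2))

In the transient phase successive gradients point in similar directions, so the statistic stays positive; once the iterates bounce around the minimum, noise dominates and it drifts negative.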
We present a comparison of our results with the expected rate of correlated neutron captures arising from high-energy neutrons incident on the outside of the WATCHBOY shield, predicted by a hybrid FLUKA/GEANT4-based simulation. The incident neutron energy distribution used in the simulation was measured by a fast neutron spectrometer, the 1.8-ton Multiplicity and Recoil Spectrometer (MARS) detector, at the same depth. We find that the measured detection rate of two correlated neutrons is consistent with that predicted by simulation. The result lends additional confidence in the detection technique used by MARS, and therefore in the MARS spectra as measured at three different depths. Confirmation of the fast neutron flux and spectrum is important as it helps validate the scaling models used to predict the fast neutron fluxes at different overburdens.'\nauthor:\n- 'F. Sutanto'\n- 'O.A. Akindele'\n- 'M. Askins'\n- 'M. Bergevin'" +"---\nabstract: 'The state-of-the-art deep learning methods have demonstrated impressive performance in segmentation tasks. However, the success of these methods depends on a large number of manually labeled masks, which are expensive and time-consuming to collect. In this work, a novel Consistent Perception Generative Adversarial Network (CPGAN) is proposed for semi-supervised stroke lesion segmentation. The proposed CPGAN can reduce the reliance on fully labeled samples. Specifically, a similarity connection module (SCM) is designed to capture the information of multi-scale features. The proposed SCM can selectively aggregate the features at each position by a weighted sum. Moreover, a consistent perception strategy is introduced into the proposed model to enhance the effect of brain stroke lesion prediction for the unlabeled data. Furthermore, an assistant network is constructed to encourage the discriminator to learn meaningful feature representations which are often forgotten during the training stage. The assistant network and the discriminator are employed to jointly decide whether the segmentation results are real or fake. The CPGAN was evaluated on the Anatomical Tracings of Lesions After Stroke (ATLAS). The experimental results demonstrate that the proposed network achieves superior segmentation performance. In the semi-supervised segmentation task, the proposed CPGAN using only two-fifths of labeled samples outperforms some" +"---\nabstract: 'Several concepts have been brought forward to determine where terrestrial planets are likely to remain habitable in multi-stellar environments. Isophote-based habitable zones, for instance, rely on insolation geometry to predict habitability, whereas Radiative Habitable Zones take the orbital motion of a potentially habitable planet into account. Dynamically Informed Habitable Zones include gravitational perturbations on planetary orbits, and full scale, self consistent simulations promise detailed insights into the evolution of select terrestrial worlds. All of the above approaches agree that stellar multiplicity does not preclude habitability. Predictions on where to look for habitable worlds in such environments can differ between concepts. 
The aim of this article is to provide an overview of current approaches and present simple analytic estimates for the various types of habitable zones in binary star systems.'\nauthor:\n- |\n Siegfried Eggl,\\\n Rubin Observatory /\\\n Department of Astronomy,\\\n University of Washington,\\\n Seattle, 98015 WA, USA\\\n `eggl@uw.edu` Nikolaos Georgakarakos\\\n Division of Science,\\\n New York University Abu Dhabi,\\\n Abu Dhabi, P.O. BOX 129188, UAE\\\n `ng53@nyu.edu`Elke Pilat-Lohinger\\\n Institute for Astrophysics,\\\n University of Vienna,\\\n 1180 Vienna, Austria\\\n `elke.pilat-lohinger@univie.ac.at`\nbibliography:\n- 'bhz.bib'\ntitle: 'Habitable Zones in Binary Star Systems: A Zoology'\n---\n\nIntroduction\n============\n\nDefining habitable zones in binary star systems" +"---\nabstract: 'Based on in-situ measurements by Wind spacecraft from 2005 to 2015, this letter reports for the first time a clearly scale-dependent connection between proton temperatures and the turbulence in the solar wind. A statistical analysis of proton-scale turbulence shows that increasing helicity magnitudes correspond to steeper magnetic energy spectra. In particular, there exists a positive power-law correlation (with a slope $\\sim 0.4$) between the proton perpendicular temperature and the turbulent magnetic energy at scales $0.3 \\lesssim k\\rho_p \\lesssim 1$, with $k$ being the wavenumber and $\\rho_p$ being the proton gyroradius. These findings present evidence of solar wind heating by the proton-scale turbulence. They also provide insight and observational constraint on the physics of turbulent dissipation in the solar wind.'\ntitle: 'Observational evidence for solar wind proton heating by ion-scale turbulence'\n---\n\nNearly collisionless solar wind turbulence at ion scales is investigated with 11 years of in-situ data\n\nCorrelations between the spectral index and magnetic helicity, and between the proton temperature and turbulent energy are revealed\n\nA scenario for the solar wind turbulence and heating at ion scales is proposed\n\nPlain Language Summary {#plain-language-summary .unnumbered}\n======================\n\nThe solar wind is a tenuous magnetized plasma that serves as a natural" +"---\nabstract: 'Given its briefness and predictability, the minimal seesaw \u2014 a simplified version of the canonical seesaw mechanism with only two right-handed neutrino fields \u2014 has been studied in depth and from many perspectives, and now it is being pushed close to a position of directly facing experimental tests. This article is intended to provide an up-to-date review of various phenomenological aspects of the minimal seesaw and its associated leptogenesis mechanism in neutrino physics and cosmology. Our focus is on possible flavor structures of such benchmark seesaw and leptogenesis scenarios and confronting their predictions with current neutrino oscillation data and cosmological observations. 
In this connection, particular attention will be paid to the topics of lepton number violation, lepton flavor violation, discrete flavor symmetries, CP violation and antimatter of the Universe.'\naddress: |\n $^1$Institute of High Energy Physics and School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China\\\n $^2$Department of Physics, Liaoning Normal University, Dalian 116029, China\nauthor:\n- 'Zhi-zhong Xing$^{1}$ [and]{} Zhen-hua Zhao$^{2}$[^1]'\nbibliography:\n- 'iopart-num.bib'\ntitle: The minimal seesaw and leptogenesis models\n---\n\nAugust 2020\n\n[*Keywords*]{}: neutrino mass, flavor mixing, CP violation, minimal seesaw, leptogenesis\n\nIntroduction {#section 1}\n============\n\nMassive neutrinos: known and unknown" +"---\nabstract: 'We apply the bottom-up reconstruction technique in the context of bouncing cosmology in F(R) gravity, where the starting point is a suitable ansatz for an observable quantity (like the spectral index or the tensor-to-scalar ratio) rather than an a priori form of the Hubble parameter. In the inflationary scenario, the slow roll conditions are assumed to hold true, and thus the observational indices have general expressions in terms of the slow-roll parameters; for example, the tensor-to-scalar ratio in F(R) inflation can be expressed as $r = 48\epsilon_F^2$ with $\epsilon_F = -\frac{1}{H_F^2}\frac{dH_F}{dt_F}$, where $H_F$ and $t_F$ are the Hubble parameter and cosmic time, respectively. However, in the bouncing cosmology (say in F(R) gravity theory), the slow-roll conditions are not satisfied, in general, and thus the observable quantities do not have any general expressions that will hold true irrespective of the form of F(R). Thus, in order to apply the bottom-up reconstruction procedure in the F(R) bouncing model, we use the conformal correspondence between the F(R) and scalar-tensor models where the conformal factor in the present context is chosen in a way such that it leads to an inflationary scenario in the scalar-tensor frame. Since the scalar and tensor perturbations remain" +"---\nabstract: 'Eigenvectors of the reduced Bardeen-Cooper-Schrieffer Hamiltonian have recently been employed as a variational wavefunction ansatz in quantum chemistry. This wavefunction is a mean-field of pairs of electrons (geminals). In this contribution we report optimal expressions for their reduced density matrices in both the original physical basis and the basis of the Richardson-Gaudin pairs. Physical basis expressions were originally reported by Gorohovsky and Bettelheim[@GB:2011]. In each case, the expressions scale like $\mathcal{O}(N^4)$, with the most expensive step being the solution of linear equations. Analytic gradients are also reported in the physical basis. These expressions are an important step towards practical mean-field methods to treat strongly-correlated electrons.'\nauthor:\n- 'Charles-\u00c9mile Fecteau'\n- Hubert Fortin\n- Samuel Cloutier\n- 'Paul A. Johnson'\nbibliography:\n- 'BAbasis.bib'\ntitle: 'Reduced density matrices of Richardson-Gaudin states in the Gaudin algebra basis'\n---\n\nIntroduction\n============\n\nAccurate and affordable treatment of strongly-correlated electrons remains a problem in quantum chemistry. In these systems, many Slater determinants are required to capture the correct physical behaviour. If the number of important Slater determinants is small enough, active space methods are affordable and effective. 
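The quoted F(R) relation $r = 48\epsilon_F^2$ can be checked symbolically for a toy power-law Hubble rate $H_F = n/t_F$; this choice is purely illustrative and is not taken from the paper.

    import sympy as sp

    t, n = sp.symbols("t n", positive=True)
    H = n / t                                  # toy power-law Hubble rate H_F(t_F)
    eps = -sp.diff(H, t) / H**2                # slow-roll parameter epsilon_F
    r = 48 * eps**2                            # quoted relation r = 48*eps_F**2
    print(sp.simplify(eps), sp.simplify(r))    # -> 1/n, 48/n**2

So a slower expansion (smaller n) gives a larger epsilon_F and hence a larger tensor-to-scalar ratio, as the formula suggests.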
However, as the number of important Slater determinants increases, this becomes impractical and other avenues must be explored." +"---\nabstract: 'Brain age estimation from Magnetic Resonance Images (MRI) derives the difference between a subject\u2019s biological brain age and their chronological age. This is a potential biomarker for neurodegeneration, e.g. as part of Alzheimer\u2019s disease. Early detection of neurodegeneration manifesting as a higher brain age can potentially facilitate better medical care and planning for affected individuals. Many studies have been proposed for the prediction of chronological age from brain MRI using machine learning and specifically deep learning techniques. Contrary to most studies, which use the whole brain volume, in this study, we develop a new deep learning approach that uses 3D patches of the brain as well as convolutional neural networks (CNNs) to develop a localised brain age estimator. In this way, we can obtain a visualization of the regions that play the most important role for estimating brain age, leading to more anatomically driven and interpretable results, and thus confirming relevant literature which suggests that the ventricles and the hippocampus are the areas that are most informative. In addition, we leverage this knowledge in order to improve the overall performance on the task of age estimation by combining the results of different patches using an ensemble method, such" +"---\nabstract: 'Diffusive shock acceleration is a prominent mechanism for producing energetic particles in space and in astrophysical systems. Such energetic particles have long been predicted to affect the hydrodynamic structure of the shock, in turn leading to CR spectra flatter than the test-particle prediction. However, in this work along with a companion paper, [@haggerty+20], , we use self-consistent hybrid (kinetic ions-fluid electrons) simulations to show for the first time how CR-modified shocks actually produce [*steeper*]{} spectra. The steepening is driven by the enhanced advection of CRs embedded in magnetic turbulence downstream of the shock, in what we call the \u201cpostcursor\u201d. These results are consistent with multi-wavelength observations of supernovae and supernova remnants and have significant phenomenological implications for space/astrophysical shocks in general.'\nauthor:\n- Damiano Caprioli\n- 'Colby C. Haggerty'\n- Pasquale Blasi\nbibliography:\n- 'Total.bib'\ntitle: 'Kinetic Simulations of Cosmic-Ray\u2013Modified Shocks II: Particle Spectra'\n---\n\nIntroduction {#sec:intro}\n============\n\nDiffusive Shock Acceleration [DSA, @krymskii77; @bell78a; @blandford+78; @axford+78] is a ubiquitous mechanism for producing relativistic particles and non-thermal emission in many astrophysical environments. This special case of first-order Fermi acceleration involves particles diffusing back and forth across the shock discontinuity and is particularly appealing because it produces power-law distributions in" +"---\nabstract: 'Motivated by analogies between the spread of infections and of chemical processes, we develop a model that accounts for infection and transport where infected populations correspond to chemical species. Areal densities emerge as the key variables, thus capturing the effect of spatial density. We derive expressions for the kinetics of the infection rates, and for the important parameter $R_0$, that include areal density and its spatial distribution. 
We present results for a batch reactor, the chemical process equivalent of the SIR model, where we examine how the dependence of $R_0$ on process extent, the initial density of infected individuals, and fluctuations in population densities affect the progression of the disease. We then consider spatially distributed systems. Diffusion generates traveling waves that propagate at a constant speed, proportional to the square root of the diffusivity and $R_0$. [Preliminary analysis shows a similar behavior on the effect of stochastic advection.]{}'\nauthor:\n- |\n Harisankar Ramaswamy\\\n Aerospace and Mechanical Engineering\\\n Viterbi School of Engineering\\\n University of Southern California\\\n `hramaswa@usc.edu`\\\n Assad A. Oberai\\\n Aerospace and Mechanical Engineering\\\n Viterbi School of Engineering\\\n University of Southern California\\\n `aoberai@usc.edu`\\\n Yannis C. Yortsos [^1]\\\n Mork Family Department of Chemical Engineering and Materials Science\\\n Viterbi School of" +"---\nabstract: 'We provide an efficient and private solution to the problem of encryption-aware data-driven control. We investigate a Control as a Service scenario, where a client employs a specialized outsourced control solution from a service provider. The privacy-sensitive model parameters of the client\u2019s system are either not available or variable. Hence, we require the service provider to perform data-driven control in a privacy-preserving manner on the input-output data samples from the client. To this end, we co-design the control scheme with respect to both *control performance* and *privacy specifications*. First, we formulate our control algorithm based on recent results from the behavioral framework, and we prove closeness between the classical formulation and our formulation that accounts for noise and\u00a0precision errors arising from encryption. Second, we use a state-of-the-art leveled homomorphic encryption scheme to enable the service provider to perform high complexity computations on the client\u2019s encrypted data, ensuring privacy. Finally, we streamline our solution by exploiting the rich structure of data, and meticulously employing ciphertext batching and rearranging operations to enable parallelization. This solution achieves more than twofold runtime and memory improvements compared to our prior work.'\nauthor:\n- 'Andreea B. Alexandru, Anastasios Tsiamis and George J. Pappas[^1]'" +"---\nabstract: 'In the present investigation we use observational data of $ f \sigma_ {8} $ to determine observational constraints in the plane $(\Omega_{m0},\sigma_{8})$ using two different methods: the growth factor parametrization and the numerical solutions method for density contrast, $\delta_{m}$. We verified the correspondence between both methods for three models of accelerated expansion: the $\Lambda CDM$ model, the $ w_{0}w_{a} CDM$ model and the running cosmological constant $RCC$ model. In all cases we also consider curvature as a free parameter. The study of this correspondence is important because the growth factor parametrization method is frequently used to discriminate between competing models. Our results allow us to determine that there is a good correspondence between the observational constraints obtained using both methods. We also test the power of the $ f\sigma_ {8} $ data to constrain the curvature parameter within the $ \Lambda CDM $ model. For this, we use a non-parametric reconstruction using Gaussian processes. 
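The batch-reactor analogy above amounts to the standard SIR kinetics; a minimal integration with assumed rate constants (chosen so that $R_0 = \beta/\gamma = 2.5$, a toy value) reads:

    import numpy as np
    from scipy.integrate import solve_ivp

    beta, gamma = 0.5, 0.2          # "reaction rates"; R0 = beta / gamma = 2.5
    def sir(t, y):
        s, i, r = y                 # areal densities, with s + i + r = 1
        return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

    sol = solve_ivp(sir, (0, 100), [0.99, 0.01, 0.0], dense_output=True)
    s_inf = sol.y[0, -1]
    print(f"R0 = {beta/gamma:.1f}, final susceptible fraction = {s_inf:.3f}")

The spatially distributed (traveling-wave) case adds diffusion terms to these kinetics, which is beyond this sketch.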
Our results show that the $f\sigma_{8}$ data at the current precision level do not allow us to distinguish between a flat and non-flat universe.'\nauthor:\n- 'A. M. Velasquez-Toribio'\n- 'J\u00falio C. Fabris'\ntitle: 'The growth factor parametrization versus numerical solutions in flat and" +"---\nabstract: 'We report the measurement of a spectroscopic transit of TOI-1726 c, one of two planets transiting a G-type star with $V$ = 6.9 in the Ursa Major Moving Group ($\sim$400 Myr). With a precise age constraint from cluster membership, TOI-1726 provides a great opportunity to test various obliquity excitation scenarios that operate on different timescales. By modeling the Rossiter-McLaughlin (RM) effect, we derived a sky-projected obliquity of $-1^{+35}_{-32}~^{\circ}$. This result rules out a polar/retrograde orbit and is consistent with an aligned orbit for planet c. Considering the previously reported, similarly prograde RM measurement of planet b and the transiting nature of both planets, TOI-1726 tentatively conforms to the overall picture that compact multi-transiting planetary systems tend to have coplanar, likely aligned orbits. TOI-1726 is also a great atmospheric target for understanding differential atmospheric loss of sub-Neptune planets (planet b, 2.2 $R_\oplus$, and planet c, 2.7 $R_\oplus$, both likely underwent photoevaporation). The coplanar geometry points to a dynamically cold history of the system that simplifies any future modeling of atmospheric escape.'\nauthor:\n- Fei Dai\n- Arpita Roy\n- Benjamin Fulton\n- Paul Robertson\n- Lea Hirsch\n- Howard Isaacson\n- Simon Albrecht\n- 'Andrew W. Mann'\n- 'Martti H." +"---\nabstract: |\n We explore features of the scalar structure $X_0(2900)$, which is one of the two resonances discovered recently by LHCb in the $D^{-}K^{+}$ invariant mass distribution in the decay $B^{+} \to D^{+}D^{-}K^{+}$. We treat $X_0(2900)$ as a hadronic molecule composed of the conventional mesons $\overline{D}^{*0}$ and $K^{*0}$ and calculate its mass, coupling and width. The mass and coupling of $X_0(2900)$ are determined using the QCD two-point sum rule method by taking into account quark, gluon, and mixing vacuum condensates up to dimension $15$. The decay of this structure to the final state $D^{-}K^{+}$ is investigated in the context of the light-cone sum rule approach supported by a soft-meson technique. To this end, we evaluate the strong coupling $G$ corresponding to the vertex $X_0D^{-}K^{+}$, which allows us to find the width of the decay $X_0(2900) \to D^{-}K^{+}$. Obtained predictions for the mass of the hadronic molecule $\overline{D}^{*0}K^{*0}$, $m=(2868 \pm 198)~\mathrm{MeV}$, and for its width, $\Gamma=(49.6 \pm 9.3)~\mathrm{MeV}$, can be considered as arguments in favor of the molecular interpretation of $X_0(2900)$.\nauthor:\n- 'S.\u00a0S.\u00a0Agaev'\n- 'K.\u00a0Azizi'\n- 'H.\u00a0Sundu'\ntitle: 'New scalar resonance $X_0(2900)$ as a $\overline{D}^{*}K^{*}$ molecule: Mass and width'\n---\n\nIntroduction {#sec:Int}\n============" +"---\nabstract: 'Color transfer, which plays a key role in image editing, has attracted noticeable attention recently. It has remained a challenge to date due to various issues such as time-consuming manual adjustment and the need for prior segmentation. In this paper, we propose to model color transfer under a probability framework and cast it as a parameter estimation problem. 

In particular, we relate the transferred image to the example image under the Gaussian Mixture Model (GMM) and regard the transferred image colors as the GMM centroids. We employ the Expectation-Maximization (EM) algorithm (E-step and M-step) for optimization. To better preserve gradient information, we introduce a Laplacian based regularization term to the objective function at the M-step, which is solved by deriving a gradient descent algorithm. Given the input of a source image and an example image, our method is able to generate continuous color transfer results with increasing EM iterations. Various experiments show that our approach generally outperforms other competitive color transfer methods, both visually and quantitatively.'\nauthor:\n- 'Chunzhi Gu, Xuequan Lu, and\u00a0Chao\u00a0Zhang[^1] [^2][^3]'\nbibliography:\n- 'reference.bib'\ntitle: 'Example-based Color Transfer with Gaussian Mixture Modeling'\n---\n\ncolor transfer, Gaussian mixture model.\n\nIntroduction\n============\n\nan image by endowing it" +"---\nabstract: '\[sec:abstract\] Marketing mix models (MMMs) are statistical models for measuring the effectiveness of various marketing activities such as promotion, media advertisement, etc. In this research, we propose a comprehensive marketing mix model that captures the hierarchical structure and the carryover, shape and scale effects of certain marketing activities, as well as sign restrictions on certain coefficients that are consistent with common business sense. In contrast to commonly adopted approaches in practice, which estimate parameters in a multi-stage process, the proposed approach estimates all the unknown parameters/coefficients simultaneously using a constrained maximum likelihood approach, solved with the Hamiltonian Monte Carlo algorithm. We present results on real datasets to illustrate the use of the proposed solution algorithm.'\naddress: 'Research & Development, Precima, Chicago, IL 60606'\nauthor:\n- Hao Chen\n- Minguang Zhang\n- Lanshan Han\n- Alvin Lim\nbibliography:\n- 'sample.bib'\ntitle: Hierarchical Marketing Mix Models with Sign Constraints\n---\n\nMarketing Mix Model, Hierarchical Models, Constrained Regression Analysis, Hamiltonian Monte Carlo\n\nIntroduction\n============\n\nMarketing activities, such as TV advertisement, discounting, direct mail, etc., are prevailing approaches for consumer packaged goods manufacturers and service providers to enhance their brand awareness and product/service messaging to consumers in order to increase sales." +"---\nabstract: 'Trust and trustworthiness form the basis for continued social and economic interactions, and they are also fundamental for cooperation, fairness, honesty, and indeed for many other forms of prosocial and moral behavior. However, trust entails risks, and building a trustworthy reputation requires effort. So how did trust and trustworthiness evolve, and under which conditions do they thrive? To find answers, we operationalize trust and trustworthiness using the trust game with the trustor\u2019s investment and the trustee\u2019s return of the investment as the two key parameters. We study this game on different networks, including the complete network, random and scale-free networks, and in the well-mixed limit. We show that in all but one case the network structure has little effect on the evolution of trust and trustworthiness. 

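The E/M updates named in the color-transfer abstract just above have a familiar closed form when the covariances are held fixed. The numpy sketch below runs EM for an equal-weight, isotropic GMM whose centroids are the only learned parameters, loosely mirroring the "transferred colors as GMM centroids" idea; the data, mixture size and variance are illustrative assumptions, and the paper's Laplacian-regularized M-step (solved by gradient descent) is not reproduced.

```python
# Minimal EM sketch for a GMM with fixed isotropic covariance, where only
# the centroids are updated.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 3))                      # e.g. RGB pixel colors in [0, 1]
K, sigma2 = 8, 0.01                            # mixture size, fixed variance
mu = X[rng.choice(len(X), K, replace=False)]   # centroid initialisation

for _ in range(50):
    # E-step: responsibilities under equal-weight isotropic Gaussians
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    r = np.exp(-0.5 * (d2 - d2.min(axis=1, keepdims=True)) / sigma2)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: closed-form centroid update (the paper instead adds a
    # Laplacian regulariser here and solves the M-step by gradient descent)
    mu = (r.T @ X) / r.sum(axis=0)[:, None]
```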
Specifically, for well-mixed populations, lattices, random and scale-free networks, we find that trust never evolves, while trustworthiness evolves with some probability depending on the game parameters and the updating dynamics. Only for the scale-free network with degree-non-normalized dynamics do we find parameter values for which trust evolves but trustworthiness does not, as well as values for which both trust and trustworthiness evolve. We conclude with a discussion about mechanisms that could lead" +"---\nabstract: 'Selective mitigation or selective hardening is an effective technique to obtain a good trade-off between the improvements in the overall reliability of a circuit and the hardware overhead induced by the hardening techniques. Selective mitigation relies on preferentially protecting circuit instances according to their susceptibility and criticality. However, ranking circuit parts in terms of vulnerability usually requires computationally intensive fault-injection simulation campaigns. This paper presents a new methodology which uses machine learning clustering techniques to group flip-flops with similar expected contributions to the overall functional failure rate, based on the analysis of a compact set of features combining attributes from static elements and dynamic elements. Fault simulation campaigns can then be executed on a per-group basis, significantly reducing the time and cost of the evaluation. The effectiveness of grouping similar sensitive flip-flops by machine learning clustering algorithms is evaluated on a practical example. Different clustering algorithms are applied and the results are compared to an ideal selective mitigation obtained by exhaustive fault-injection simulation.'\nauthor:\n- \nbibliography:\n- 'IEEEabrv.bib'\n- 'bib/VTS\_2020.bib'\ntitle: 'Machine Learning Clustering Techniques for Selective Mitigation of Critical Design Features [^1] '\n---\n\n2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be" +"---\nabstract: 'For non-Hermitian quantum models, the dynamics is not directly reflected in the static properties, e.g., the complex energy spectrum, because of the non-orthogonality of the right eigenvectors, the nonunitarity of the time evolution, the breakdown of the adiabatic theory, etc., but in experiments the time evolution of an initial state is commonly used. Here, we pay attention to the dynamics of an initial end state in nonreciprocal Su-Schrieffer-Heeger models under open boundary conditions, and we find that it is dynamically more robust than its Hermitian counterpart, because the non-Hermitian skin effect can suppress the part leaking to the bulk sites. To observe this, we propose a classical electric circuit with only a few passive inductors and capacitors, the mapping of which to the quantum model is established. 

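The well-mixed limit of the trust game discussed above admits a very small Monte Carlo illustration. In the sketch below, each player carries a binary investment trait and a return fraction, and strategies spread by the standard Fermi imitation rule; the multiplication factor, population size, selection strength and discrete strategy sets are illustrative assumptions rather than the paper's exact setup.

```python
# Well-mixed Monte Carlo sketch of the trust game with Fermi imitation.
import numpy as np

rng = np.random.default_rng(1)
N, b, K, steps = 500, 3.0, 0.1, 200_000
inv = rng.integers(0, 2, N).astype(float)   # trustor trait: invest (1) or not (0)
ret = rng.choice([0.0, 0.5], N)             # trustee trait: fraction returned

def payoff(i):
    # i plays once as trustor (against trustee a) and once as trustee
    # (receiving an investment from trustor c)
    a, c = rng.integers(0, N, 2)
    as_trustor = 1.0 - inv[i] + ret[a] * b * inv[i]
    as_trustee = (1.0 - ret[i]) * b * inv[c]
    return as_trustor + as_trustee

for _ in range(steps):
    i, j = rng.integers(0, N, 2)
    if rng.random() < 1.0 / (1.0 + np.exp((payoff(i) - payoff(j)) / K)):
        inv[i], ret[i] = inv[j], ret[j]     # i imitates j (Fermi rule)

print("mean investment:", inv.mean(), "| mean return fraction:", ret.mean())
```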
This work explains how the non-Hermitian skin effect enhances the robustness of the topological end state, and it offers an easy way, via the classical electric circuit, of studying the nonreciprocal quantum dynamics, which may stimulate more dynamical studies of non-Hermitian models on other platforms.'\nauthor:\n- 'Li-Jun Lang (\u90ce\u5229\u541b)'\n- 'Yijiao Weng (\u7fc1\u76ca\u5a07)'\n- 'Yunhui Zhang (\u5f20\u4e91\u8f89)'\n- 'Enhong Cheng (\u6210\u6069\u5b8f)'\n- 'Qixia Liang (\u6881\u7eee\u971e)'\nbibliography:\n- 'ref.bib'\ntitle: 'Dynamical" +"---\nabstract: 'We introduce a framework to analyze the conversation between two competing groups of Twitter users, one that believes in the anthropogenic causes of climate change (Believers) and a second that is skeptical (Disbelievers). As a case study, we use Climate Change related tweets during the United Nations\u2019 (UN) Climate Change Conference \u2013 COP24 (2018), Katowice, Poland. We find that both Disbelievers and Believers talk within their group more than with the other group; this is more so the case for Disbelievers than for Believers. The Disbeliever messages focused more on attacking those personalities that believe in the anthropogenic causes of climate change. On the other hand, Believer messages focused on calls to combat climate change. We find that bot-like accounts were equally active among both Disbelievers and Believers, and that unlike Believers, Disbelievers get their news from a concentrated number of news sources.'\nauthor:\n- Aman Tyagi\n- Matthew Babcock\n- 'Kathleen M. Carley'\n- 'Douglas C. Sicker'\nbibliography:\n- 'references.bib'\ntitle: 'Polarizing Tweets on Climate Change[^1]'\n---\n\nIntroduction\n============\n\nSocial media platforms such as Twitter have become an important medium for debating and organizing around complex social issues [@climateSandB]. One such complex issue with significant socio-economic and" +"---\nabstract: 'Random batch algorithms are constructed for quantum Monte Carlo simulations. The main objective is to alleviate the computational cost associated with the calculations of two-body interactions, including the pairwise interactions in the potential energy, and the two-body terms in the Jastrow factor. In the framework of variational Monte Carlo methods, the random batch algorithm is constructed based on the over-damped Langevin dynamics, so that updating the position of each particle in an $N$-particle system only requires $\mathcal{O}(1)$ operations, thus for each time step the computational cost for $N$ particles is reduced from $\mathcal{O}(N^2)$ to $\mathcal{O}(N)$. For diffusion Monte Carlo methods, the random batch algorithm uses an energy decomposition to avoid the computation of the total energy in the branching step. The effectiveness of the random batch method is demonstrated using a system of liquid ${}^4$He atoms interacting with a graphite surface.'\nauthor:\n- 'Shi Jin[^1], Xiantao Li[^2]'\nbibliography:\n- 'qmc.bib'\ntitle: Random Batch Algorithms for Quantum Monte Carlo simulations\n---\n\nIntroduction\n============\n\nOne of the fundamental problems in chemistry is the computation of the ground state energy of a many-body quantum system. Although this major difficulty has been circumvented to some extent by the density-functional" +"---\nabstract: 'Collisionless plasma shocks are efficient sources of non-thermal particle acceleration in space and astrophysical systems. 

We use hybrid (kinetic ions \u2013 fluid electrons) simulations to examine the non-linear feedback of the self-generated energetic particles (cosmic rays, CRs) on the shock hydrodynamics. When CR acceleration is efficient, we find evidence of both an upstream precursor, where the inflowing plasma is compressed and heated, and a downstream postcursor, where the energy flux in CRs and amplified magnetic fields play a dynamical role. For the first time, we assess how non-linear magnetic fluctuations in the postcursor preferentially travel away from the shock at roughly the local Alfv\u00e9n speed with respect to the downstream plasma. The drift of both magnetic and CR energy with respect to the thermal plasma substantially increases the shock compression ratio with respect to the standard prediction, in particular exceeding 4 for strong shocks. Such modifications also have implications for the spectrum of the particles accelerated via diffusive shock acceleration, a significant result detailed in a companion paper [@caprioli+20].'\nauthor:\n- 'Colby C. Haggerty'\n- Damiano Caprioli\nbibliography:\n- 'Total.bib'\ntitle: 'Kinetic Simulations of Cosmic-Ray\u2013Modified Shocks I: Hydrodynamics'\n---\n\n\[sec:intro\]Introduction\n=========================\n\nNon-relativistic shocks are abundant in space" +"---\nabstract: 'One of the most interesting problems that arise when studying certain structures on topological spaces, and in particular on differential manifolds, is being able to extend the properties that are valid locally to the whole space. A useful tool, which has perhaps been underestimated, is a lemma introduced by G. Bredon, which we refer to as Bredon\u2019s trick and which allows the extension of local properties to certain topological spaces. We review this result and show its application in the context of De Rham\u2019s cohomology; we will see how this trick allows one to give natural alternative proofs of classic results, and how it is fundamental in other settings such as stratified pseudo-manifolds.'\nauthor:\n- Mauricio Angel\n---\n\nIntroduction\n============\n\nBredon\u2019s trick is a lemma introduced by G. Bredon which allows one to extend properties that are valid locally to the whole of a topological space satisfying certain conditions. In [@Bredon] Bredon refers to the fact that this lemma was originally introduced in 1962 while he was teaching a course on Lie Groups; however, as far as the author knows, there are no references where the lemma can be found other than [@Bredon], where" +"---\nabstract: 'Solving the Lexicographic Bottleneck Assignment Problem (LexBAP) typically relies on centralised computation with order $\mathcal{O}(n^4)$ complexity. We consider the Sequential Bottleneck Assignment Problem (SeqBAP), which yields a greedy solution to the LexBAP and discuss the relationship between the SeqBAP, the LexBAP, and the Bottleneck Assignment Problem (BAP). In particular, we reexamine tools used to analyse the structure of the BAP, and apply them to derive an $\mathcal{O}(n^3)$ algorithm that solves the SeqBAP. We show that the set of solutions of the LexBAP is a subset of the solutions of the SeqBAP and analyse the conditions for which the solution sets are identical. Furthermore, we provide a method to verify the satisfaction of these conditions. 

In cases where the conditions are satisfied, the proposed algorithm for solving the SeqBAP solves the LexBAP with computation that has lower complexity and can be distributed over a network of computing agents. The applicability of the approach is demonstrated with a case study where mobile robots are assigned to goal locations.'\naddress:\n- 'Department of Electrical and Electronic Engineering at the University of Melbourne, Melbourne, Australia'\n- 'Sycamore, \u00c9cole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL), Lausanne, Switzerland'\n- 'CIICADA Lab, School of Engineering, Australia" +"---\nabstract: 'We study neutrinos and dark matter based on a gauged $U(1)_R$ symmetry in the framework of a radiative seesaw scenario. We identify the dark matter as a bosonic particle that interacts with the quark and the lepton sectors through vector-like heavier quarks and leptons. The dark matter also plays a role in generating the neutrino mass matrix with the neutral heavier fermions. We explore several constraints for the masses and the couplings related to the dark matter by computing the relic density and the scattering cross sections for direct detection methods, taking into consideration neutrino oscillations, lepton flavor violations, and the muon anomalous magnetic moment. Finally, we mention the semileptonic decays and the neutral meson mixings that occur through the dark matter by one-loop box diagrams.'\nauthor:\n- 'Keiko I. Nagao'\n- Hiroshi Okada\ntitle: 'Neutrino and dark matter in a gauged $U(1)_R$ symmetry'\n---\n\n[APCTP Pre2020-020]{}\n\nIntroduction\n============\n\nIt is important to understand the neutrino sector, which is typically discussed in beyond the standard model (SM) scenarios. The simplest way to construct the neutrino mass matrix is to introduce heavier right-handed neutrinos in a renormalizable theory. If one needs a principle to introduce them, a gauged Baryon" +"---\nabstract: 'Pair density waves, identified by Cooper pairs with finite center-of-mass momentum, have recently been observed in copper oxide based high T$_\textrm{c}$ superconductors (cuprates). A charge density modulation or wave is also ubiquitously found in underdoped cuprates. Within a general mean-field one-band model we show that the coexistence of charge density waves and uniform superconductivity in $d$-wave superconductors like cuprates generates an odd-frequency spin-singlet pair density wave, in addition to the even-frequency counterparts. The strength of the induced odd-frequency pair density wave depends on the modulation wave vector of the charge density wave, with the odd-frequency pair density waves even becoming comparable to the even-frequency ones in parts of the Brillouin zone. We show that a change in the modulation wave vector of the charge density wave from bi-axial to uni-axial can enhance the odd-frequency component of the pair density waves. Such a coexistence of superconductivity and uni-axial charge density wave has already been experimentally verified at high magnetic fields in underdoped cuprates. We further discuss the possibility of an odd-frequency spin-triplet pair density wave generated in the coexistence regime of superconductivity and spin density waves, applicable to the iron-based superconductors. 

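The bottleneck machinery underlying the assignment-problem abstract above can be illustrated with a few lines of code. The sketch below computes the BAP value by binary search over cost thresholds with a bipartite-matching feasibility test; it is a standard analysis tool for the BAP and an illustrative building block only, not the paper's $\mathcal{O}(n^3)$ SeqBAP algorithm.

```python
# Sketch: the bottleneck assignment value via binary search over cost
# thresholds with a bipartite-matching feasibility test.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def bottleneck_value(cost):
    """Smallest c such that a perfect matching exists using edges <= c."""
    vals = np.unique(cost)
    lo, hi = 0, len(vals) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        feasible = csr_matrix((cost <= vals[mid]).astype(int))
        match = maximum_bipartite_matching(feasible, perm_type='column')
        if (match >= 0).all():             # every row matched to a column
            hi = mid
        else:
            lo = mid + 1
    return vals[lo]

cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
print(bottleneck_value(cost))              # -> 2.0
```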
Our work thus presents a route to bulk" +"---\nabstract: 'We propose a machine-learning inspired variational method to obtain the Liouvillian gap, which plays a crucial role in characterizing the relaxation time and dissipative phase transitions of open quantum systems. By using the \u201cspin bi-base mapping\u201d, we map the density matrix to a pure restricted-Boltzmann-machine (RBM) state and transform the Liouvillian superoperator to a rank-two non-Hermitian operator. The Liouvillian gap can be obtained by a variational real-time evolution algorithm under this non-Hermitian operator. We apply our method to the dissipative Heisenberg model in both one and two dimensions. For the isotropic case, we find that the Liouvillian gap can be analytically obtained and in one dimension even the whole Liouvillian spectrum can be exactly solved using the Bethe ansatz method. By comparing our numerical results with their analytical counterparts, we show that the Liouvillian gap can be accessed by the RBM approach efficiently to a desirable accuracy, regardless of the dimensionality and entanglement properties.'\nauthor:\n- Dong Yuan\n- 'He-Ran Wang'\n- Zhong Wang\n- 'Dong-Ling Deng'\nbibliography:\n- 'Dengbib.bib'\n- 'Dongbib.bib'\n- 'Heranbib.bib'\n- 'LGviaRBM.bib'\n- 'NonHermRefs.bib'\ntitle: Solving the Liouvillian Gap with Artificial Neural Networks\n---\n\n[^1]\n\n[^2]\n\nStudies of open quantum systems have attracted tremendous" +"---\nabstract: 'A conventional and rotating magnetoelectric effect of a half-filled spin-electron model on a doubly decorated square lattice is investigated by exact calculations. The importance of the electron hopping and of the spatial orientation of the electric field for the magnetoelectric effect is examined in detail. The critical temperature may display one or two consecutive round maxima as a function of the electric field. Although the rotating magnetoelectric effect (RME) does not affect the ground-state ordering, the pronounced RME is found close to the critical temperature of the continuous phase transition. It is shown that RME is amplified upon strengthening of the electric field, which additionally assists thermal fluctuations in destroying the spontaneous antiferromagnetic long-range order.'\nauthor:\n- 'Hana \u010cen\u010darikov\u00e1$^{1}$ and Jozef Stre\u010dka$^2$'\ntitle: 'Conventional and rotating magnetoelectric effect of a half-filled spin-electron model on a doubly decorated square lattice'\n---\n\nIntroduction {#sec:introduction}\n============\n\nMore than 3000 publications dealing with a magnetoelectric effect registered in research databases (Web of Science and Scopus) during the last five years document the huge attractiveness of unconventional materials, whose magnetic properties can be controlled by an electric field. It is apparent that a wide range of practical applications, e.g., in spintronics, automation engineering, security, navigation or" +"---\nabstract: 'In this paper, we give a singular function on the unit interval derived from the dynamics of the one-dimensional elementary cellular automaton Rule $150$. We describe properties of the resulting function, which is strictly increasing, uniformly continuous, and differentiable almost everywhere, and we show that it is not differentiable at dyadic rational points. 

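For reference, Rule 150 updates each cell to the parity (XOR) of itself and its two neighbors. The minimal numpy sketch below evolves it from a single seed; the lattice width, step count and periodic boundaries are illustrative choices, and the paper's actual construction of the singular function on top of this dynamics is not reproduced here.

```python
# Minimal sketch of elementary cellular automaton Rule 150: each cell becomes
# the XOR of its left neighbor, itself and its right neighbor (periodic
# boundaries for simplicity).
import numpy as np

width, steps = 65, 32
row = np.zeros(width, dtype=np.uint8)
row[width // 2] = 1                          # single seed cell

history = [row.copy()]
for _ in range(steps):
    row = np.roll(row, 1) ^ row ^ np.roll(row, -1)
    history.append(row.copy())

print(np.array([r.sum() for r in history]))  # number of live cells per step
```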
We also give functional equations that the function satisfies, and show that the function is the unique solution of these functional equations.'\nauthor:\n- |\n Akane Kawaharada[^1]\\\n Department of Mathematics, Kyoto University of Education\ntitle: 'Singular function emerging from one-dimensional elementary cellular automaton Rule $150$'\n---\n\ncellular automaton, fractal, singular function\n\nIntroduction\n============\n\nThere exist many pathological functions. The Weierstrass function and the Takagi function, for example, are real-valued functions that are continuous everywhere but nowhere differentiable [@weier1872; @takagi1903]. Generalizations of the Takagi function were given in [@hatayama1984]. Okamoto\u2019s function is a one-parameter family of self-affine functions whose differentiability is determined by the parameter; it is differentiable almost everywhere, non-differentiable almost everywhere, or nowhere differentiable [@okamoto2005; @okamoto2007; @kobayashi2009]. A singular function is one that is monotonically increasing (or decreasing), continuous everywhere, and has zero derivative almost everywhere. The Cantor function is an example of" +"---\nabstract: 'Variational algorithms have particular relevance for near-term quantum computers but require non-trivial parameter optimisations. Here we propose Analytic Descent: Given that the energy landscape must have a certain simple form in the local region around any reference point, it can be efficiently approximated in its entirety by a classical model \u2013 we support these observations with rigorous, complexity-theoretic arguments. One can classically analyse this approximate function in order to directly \u2018jump\u2019 to the (estimated) minimum, before determining a more refined function if necessary. We derive an optimal measurement strategy and generally prove that the asymptotic resource cost of a \u2018jump\u2019 corresponds to only a single gradient vector evaluation.'\nauthor:\n- B\u00e1lint Koczor\n- 'Simon C. Benjamin'\nbibliography:\n- 'bibliography.bib'\ntitle: Quantum Analytic Descent\n---\n\nIntroduction\n============\n\nQuantum devices have already been announced whose behaviour cannot be simulated using classical computers with practical levels of resource\u00a0[@GoogleSupremacy; @PhysRevLett.127.180502; @PhysRevLett.127.180501; @ebadi2021quantum]. In this era, quantum computers may have the potential to perform tasks of practical value. The early machines will not have a comprehensive solution to accumulating noise\u00a0[@preskill2018quantum], and therefore it is a considerable and fascinating challenge to achieve a valuable function despite the imperfections. One very promising class" +"---\nabstract: 'We introduce the notion of *consistent error bound functions* which provides a unifying framework for error bounds for multiple convex sets. This framework goes beyond the classical Lipschitzian and H\u00f6lderian error bounds and includes logarithmic and entropic error bounds found in the exponential cone. It also includes the error bounds obtainable under the theory of amenable cones. Our main result is that the convergence rate of several projection algorithms for feasibility problems can be expressed explicitly in terms of the underlying consistent error bound function. Another feature is the usage of [Karamata theory]{} and functions of regular variation, which allow us to reason about convergence rates while bypassing certain complicated expressions. 

Finally, applications to conic feasibility problems are given and we show that a number of algorithms have convergence rates depending explicitly on the singularity degree of the problem.'\nauthor:\n- 'Tianxiang Liu[^1]'\n- 'Bruno F. Louren\u00e7o[^2]'\nbibliography:\n- 'bib\_plain.bib'\ntitle: Convergence analysis under consistent error bounds\n---\n\n[*Key words:*]{} error bounds; consistent error bound; convergence rate; amenable cones; regular variation; Karamata theory.\n\nIntroduction\n============\n\nIn this paper, we consider the following convex feasibility problem (CFP) $$\label{CFP}\n{\rm find}\ x\in C: = \bigcap_{i = 1}^mC_i, \tag{CFP}$$ where $C_1," +"---\nabstract: 'We examine ac-driven skyrmions interacting with the interface between two different obstacle array structures. We consider drive amplitudes at which skyrmions in a bulk obstacle lattice undergo only localized motion and show that when an obstacle lattice interface is introduced, directed skyrmion transport can occur along the interface. The skyrmions can be guided by a straight interface and can also turn corners to follow the interface. For a square obstacle lattice embedded in a square pinning array with a larger lattice constant, we find that skyrmions can undergo transport in all four primary symmetry directions under the same fixed ac drive. We map where localized or translating motion occurs as a function of the ac driving parameters. Our results suggest a new method for controlling skyrmion motion based on transport along obstacle lattice interfaces.'\naddress:\n- 'Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA'\n- 'POSMAT - Programa de P\u00f3s-Gradua\u00e7\u00e3o em Ci\u00eancia e Tecnologia de Materiais, Faculdade de Ci\u00eancias, Universidade Estadual Paulista - UNESP, Bauru, SP, CP 473, 17033-360, Brazil'\n- 'Departamento de F\u00edsica, Faculdade de Ci\u00eancias, Universidade Estadual Paulista - UNESP, Bauru, SP, CP" +"---\nabstract: 'We study the fate of many-body localization (MBL) in the presence of long range hopping ($\sim 1/r^{\sigma}$) in a system subjected to an electric field (static and time-periodic) along with a slowly-varying aperiodic potential. We show that the MBL in the static electric-field model is robust against arbitrary long-range hopping in sharp contrast to other disordered models, where MBL is killed by sufficiently long-range hopping. Next, we show that the drive-induced phenomena associated with an ac square wave electric field are also robust against long-range hopping. Specifically, we obtain drive-induced MBL, where a high-frequency drive can convert the ergodic phase into the MBL phase. Remarkably, we find that a coherent destruction of MBL is also possible with the aid of a resonant drive. 

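The projection algorithms whose convergence rates the error-bound abstract above analyzes can be demonstrated on a toy instance of the CFP. The sketch below runs the method of alternating projections on two convex sets with closed-form projections (a unit ball and a halfspace); the sets, starting point and iteration count are illustrative assumptions.

```python
# Sketch: the method of alternating projections on a toy two-set convex
# feasibility problem: find x in both C1 (unit ball) and C2 (halfspace).
import numpy as np

def proj_ball(x):                          # C1 = {x : ||x|| <= 1}
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

a, b = np.array([1.0, 1.0]), 1.3           # C2 = {x : <a, x> <= b}

def proj_halfspace(x):
    g = a @ x - b
    return x if g <= 0.0 else x - g * a / (a @ a)

x = np.array([5.0, -2.0])
for _ in range(50):                        # alternate the two projections
    x = proj_halfspace(proj_ball(x))

print(x, np.linalg.norm(x) <= 1.0 + 1e-9, a @ x - b <= 1e-9)
```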
Thus in both the static and time-periodic square wave electric field models, the qualitative properties of the system are independent of whether the hopping is short-ranged or long-ranged.'\nauthor:\n- Devendra Singh Bhakuni\n- Auditya Sharma\nbibliography:\n- 'ref.bib'\ntitle: 'Stability of electric-field-driven MBL in an interacting long range hopping model'\n---\n\nIntroduction\n============\n\nMany-body localization (MBL)\u00a0[@basko2006metal; @nandkishore2015many; @PhysRevB.21.2366; @PhysRevLett.95.206603], in which localization is known to persist even in the presence of many-body interactions," +"---\nabstract: 'We investigate the photoassociation dynamics of exactly two laser-cooled $^{85}$Rb atoms in an optical tweezer and reveal fundamentally different behavior from photoassociation in many-atom ensembles. We observe non-exponential decay in our two-atom experiment that cannot be described by a single rate coefficient and find its origin in our system\u2019s pair correlation. This is in stark contrast to many-atom photoassociation dynamics, which are governed by decay with a single rate coefficient. We also investigate photoassociation in a three-atom system, thereby probing the transition from two-atom dynamics to many-atom dynamics. Our experiments reveal additional reaction dynamics that are only accessible through the control of single atoms and suggest photoassociation could measure pair correlations in few-atom systems. It further showcases our complete control over the quantum state of individual atoms and molecules, which provides information unobtainable from many-atom experiments.'\nauthor:\n- 'M.\u00a0Weyland'\n- 'S.\u00a0S.\u00a0Szigeti'\n- 'R.\u00a0A.\u00a0B.\u00a0Hobbs'\n- 'P.\u00a0Ruksasakchai'\n- 'L.\u00a0Sanchez'\n- 'M.\u00a0F.\u00a0Andersen'\nnocite: '[@Chernick_Bootstrap]'\ntitle: Pair Correlations and Photoassociation Dynamics of Two Atoms in an Optical Tweezer\n---\n\nChemical processes govern the natural world and are used to create desired molecular structures. Such reactions usually occur in macroscopic samples of atoms" +"---\nabstract: 'We investigate the Bianchi-I cosmological model in the presence of a generalized Chaplygin gas (GCG) and variable gravitational and cosmological constants. The exact solutions of the Einstein field equations are obtained with a time-varying periodic deceleration parameter. The graphical representation method has been used to discuss the physical and dynamical behaviour of the model. Further, the stability and physical acceptability of the obtained solutions have been investigated. Most of the parameters show periodic behaviour in this study due to the presence of the cosine function in the deceleration parameter. In all the cases, pressure is negative, which leads to late-time expansion of the universe. The considered models are found to be stable.'\nauthor:\n- 'N. Hulke'\n- 'G. P. Singh'\n- 'Binaya K. Bishi'\ntitle: 'Bianchi-I cosmology with generalized Chaplygin gas and periodic deceleration parameter.'\n---\n\n**Keywords:** Bianchi-I; periodic deceleration parameter; dark energy; cosmological constant\n\nIntroduction\n============\n\nCosmological and astronomical data [@1; @4; @51; @6; @7; @8; @9] reveal that the universe is currently undergoing accelerating expansion and that it originated in a bang from a phase of very high density and temperature. 

For a long time, it was believed that either the universe would expand eternally or the inward pull" +"---\nabstract: 'The power conversion efficiency of the market-dominating silicon photovoltaics approaches its theoretical limit. Bifacial solar operation, which harvests additional light impinging on the module back, and the perovskite/silicon tandem device architecture are among the most promising approaches for further increasing the energy yield from a limited area. Here, we calculate the energy output of perovskite/silicon tandem solar cells in monofacial and bifacial operation considering, for the first time, luminescent coupling between the two sub-cells. For energy yield calculations we study idealized solar cells at both standard testing and realistic weather conditions in combination with a detailed illumination model for periodic solar panel arrays. Considering typical, experimental photoluminescent quantum yield values we find that more than 50% of excess electron-hole pairs in the perovskite top cell can be utilized by the silicon bottom cell by means of luminescent coupling. As a result, luminescent coupling strongly relaxes the constraints on the top-cell bandgap in monolithic tandem devices. In combination with bifacial operation, the optimum perovskite bandgap shifts from 1.71 eV to the range 1.60-1.65 eV, where high-quality perovskite materials already exist. The results may hence shift the paradigm in developing the optimum perovskite material for tandem solar cells.'\nauthor:\n-" +"---\nabstract: 'A fundamental problem posed by the study of correlated electron compounds, of which heavy-fermion systems are prototypes, is the need to understand the physics of states near a quantum critical point (QCP). At a QCP, magnetic order is suppressed continuously to zero temperature and unconventional superconductivity often appears. Here, we report pressure ($P$)-dependent $^{115}$In nuclear quadrupole resonance (NQR) measurements on the heavy-fermion antiferromagnet CeRh$_{0.5}$Ir$_{0.5}$In$_5$. These experiments reveal an antiferromagnetic (AF) QCP at $P_{\rm c}^{\rm AF}$ = 1.2 GPa where a dome of superconductivity reaches a maximum transition temperature $T_{\rm c}$. Preceding $P_{\rm c}^{\rm AF}$, however, the NQR frequency $\nu_{\rm Q}$ undergoes an abrupt increase at $P_{\rm c}^{\rm *}$ = 0.8 GPa in the zero-temperature limit, indicating a change from localized to itinerant character of cerium\u2019s $f$-electron and an associated small-to-large change in the Fermi surface. At $P_{\rm c}^{\rm AF}$, where $T_{\rm c}$ is optimized, there is an unusually large fraction of gapless excitations well below $T_{\rm c}$ that implicates spin-singlet, odd-frequency pairing symmetry.'\nauthor:\n- Shinji Kawasaki\n- Toshihide Oka\n- Akira Sorime\n- Yuji Kogame\n- Kazuhiro Uemoto\n- Kazuaki Matano\n- Jing Guo\n- Shu Cai\n- Liling Sun\n- 'John L. Sarrao'\n- 'Joe D. Thompson'" +"---\nabstract: 'Prediction of protein-ligand complexes for flexible proteins remains a challenging problem in computational structural biology and drug design. Here we present two novel deep neural network approaches with significant improvement in efficiency and accuracy of binding mode prediction on a large and diverse set of protein systems compared to standard docking. Whereas the first, a graph convolutional network, is used for re-ranking poses, the second approach aims to generate and rank poses independent of standard docking approaches. 

This novel approach relies on the prediction of distance matrices between ligand atoms and protein C$_\alpha$ atoms, thus incorporating side-chain flexibility implicitly.'\nauthor:\n- 'Amr H. Mahmoud'\n- 'Jonas F. Lill'\n- 'Markus A. Lill'\nbibliography:\n- 'acs-achemso.bib'\ntitle: 'Graph-convolution neural network-based flexible docking utilizing coarse-grained distance matrix'\n---\n\nIntroduction\n============\n\nStructure-based drug design is an essential tool and an important pillar in Computer-aided Drug Design (CADD) for efficient lead discovery and optimization. CADD methods such as docking aim to identify novel binders to a target protein and to predict the structure of protein-ligand complexes. Docking is still widely applied using a rigid protein as a template in CADD projects, ignoring the representation of the different conformations that the binding-site can assume." +"---\nabstract: 'Schr\u00f6dinger cat states are useful for many applications, ranging from quantum information processing to high-precision measurements. In this paper we propose a conceptually new method for creating such cat states, based on photon-assisted Landau-Zener-St\u00fcckelberg interferometry in a hybrid system consisting of a qubit coupled to a photon cavity. We show that by initializing the qubit in one of its basis states, performing three consecutive sweeps of the qubit energy splitting across the 1-photon resonance, and finally projecting the qubit to the same basis state, the parity of the photon field can be purified to a very high degree; when the initial photon state is a coherent state, the final state will then be very close to a Schr\u00f6dinger cat state. We present numerical simulations that confirm that our protocol could work with high fidelity ($\sim 0.99$) for coherent states of reasonable size ($|\alpha|^2 \sim 10$). Furthermore, we suggest that our protocol can also be used to transfer quantum information between the qubit and a superposition of orthogonal cat states in the cavity.'\nauthor:\n- Jonas Lidal\n- Jeroen Danon\ntitle: |\n Generation of Schr\u00f6dinger cat states\\\n through photon-assisted Landau-Zener-St\u00fcckelberg interferometry\n---\n\n\[sec:intro\]Introduction\n=========================\n\nA coherent state is a quantum" +"---\nabstract: 'The article explores a new formalism for describing motion in quantum mechanics. The construction is based on generalized coherent states with evolving fiducial vector. Weyl-Heisenberg coherent states are utilised to split quantum systems into \u2018classical\u2019 and \u2018quantum\u2019 degrees of freedom. The decomposition is found to be equivalent to quantum mechanics perceived from a semi-classical frame. The split allows for the introduction of a new definition of a classical state and is a convenient starting point for approximate analysis of quantum dynamics. An example of a meta-stable state is given as a practical illustration of the introduced concepts.'\nauthor:\n- |\n Artur Miroszewski\\\n *artur.miroszewski@ncbj.gov.pl*\nbibliography:\n- 'references.bib'\ntitle: 'Quantum dynamics in Weyl-Heisenberg coherent states'\n---\n\nIntroduction\n============\n\nCoherent states have occupied physicists and mathematicians for almost a century. First introduced in 1926 by Edwin Schr\u00f6dinger [@Schrodinger:1926] in their standard formulation and studied by John von Neumann [@vonNeumann:1932] from the phase-space perspective, they were forgotten until the beginning of the 1960s. 

The recognition of their usefulness in atomic optics [@Glauber:1963I; @Glauber:1963II], the introduction of the concept of generalized coherent states [@Klauder:1968; @Klauder:1963I; @Klauder:1963II] and their connection to group theory [@Perelomov:1972] resulted in unflagging interest in coherent states until today. Their success in" +"---\nabstract: 'Matrix product states (MPS) and \u2018dressed\u2019 ground states of quadratic mean fields (e.g. Gutzwiller projected Slater Determinants) are both important classes of variational wave-functions. This latter class has played important roles in understanding superconductivity and quantum spin-liquids. We present a novel method to obtain both the finite and infinite MPS (iMPS) representation of the ground state of an arbitrary fermionic quadratic mean-field Hamiltonian (which in the simplest case is a Slater determinant and in the most general case is a Pfaffian). We also show how to represent products of such states (e.g. determinants times Pfaffians). From this representation one can project to single occupancy and evaluate the entanglement spectra after Gutzwiller projection. We then obtain the MPS and iMPS representation of Gutzwiller projected mean-field states that arise from the variational slave-fermion approach to the $S=1$ Bilinear-Biquadratic (BLBQ) quantum spin chain. To accomplish this, we develop an approach to orthogonalize degenerate iMPS to find all the states in the degenerate ground-state manifold. We find the energies of the MPS and iMPS states match the variational energies closely, indicating that the method is accurate and there is minimal loss due to truncation error. We then present the first exploration of the" +"---\nabstract: 'Ramsey spectroscopy via coherent population trapping (CPT) is essential in precision measurements. The conventional CPT-Ramsey fringes contain a number of almost identical oscillations, so that it is difficult to identify the central fringe. Here, we experimentally demonstrate temporal spinwave Fabry-P\u00e9rot interferometry via double-$\Lambda$ CPT of laser-cooled $^{87}$Rb atoms. Due to the constructive interference of temporal spinwaves, the transmission spectrum appears as a comb of equidistant peaks in the frequency domain and thus the central Ramsey fringe can be easily identified. From the optical Bloch equations for our five-level double-$\Lambda$ system, the transmission spectrum is analytically explained by the Fabry-P\u00e9rot interferometry of temporal spinwaves. Due to the small difference between the two Land\u00e9 factors, each peak splits into two when the external magnetic field is not too weak. This peak splitting can be employed to measure an unknown magnetic field without involving magneto-sensitive transitions.'\nauthor:\n- Ruihuan Fang\n- Chengyin Han\n- Xunda Jiang\n- Yuxiang Qiu\n- Yuanyuan Guo\n- Minhua Zhao\n- Jiahao Huang\n- Bo Lu\n- Chaohong Lee\ntitle: 'Temporal Spinwave Fabry-P\u00e9rot Interferometry via Coherent Population Trapping'\n---\n\nCoherent population trapping (CPT)\u00a0[@Gray:78], a result of destructive quantum interference between different transition paths, is of" +"---\nabstract: 'We present an imitation learning method for autonomous drone patrolling based only on raw videos. Different from previous methods, we propose to let the drone learn patrolling in the air by observing and imitating how a human navigator does it on the ground. 

The observation process enables the automatic collection and annotation of data using inter-frame geometric consistency, resulting in less manual effort and high accuracy. Then a newly designed neural network is trained based on the annotated data to predict appropriate directions and translations for the drone to patrol in a lane-keeping manner, as humans do. Our method allows the drone to fly at a high altitude with a broad view and low risk. It can also detect all accessible directions at crossroads and further integrate available user instructions with autonomous patrolling control commands. Extensive experiments are conducted to demonstrate the accuracy of the proposed imitation learning process as well as the reliability of the holistic system for autonomous drone navigation. The codes, datasets as well as video demonstrations are available at .'\nauthor:\n- 'Yue Fan$^{1,2}$, Shilei Chu$^{1}$, Wei Zhang$^{1}$\*, Ran Song$^{1}$\*, and Yibin Li$^{1}$ [^1] [^2] [^3] [^4]'\nbibliography:\n- 'bib.bib'\ntitle:" +"---\nabstract: |\n In this paper, we study the problem of balancing effectiveness and efficiency in automated feature selection. Feature selection aims to find the optimal feature subset from a large-scale feature space, and is a fundamental capability for machine learning and predictive analysis. After exploring many feature selection methods, we observe a computational dilemma: 1) traditional feature selection methods (e.g., K-Best, decision tree based ranking, mRMR) are mostly efficient, but have difficulty identifying the best subset; 2) the emerging reinforced feature selection methods automatically navigate the feature space to explore the best subset, but are usually inefficient. Are automation and efficiency always apart from each other? Can we bridge the gap between effectiveness and efficiency under automation? Motivated by such a computational dilemma, this study develops a novel feature space navigation method. To that end, we propose an Interactive Reinforced Feature Selection (IRFS) framework that guides agents not just by self-exploration experience, but also by diverse external skilled trainers to accelerate learning for feature exploration. Specifically, we formulate the feature selection problem within an interactive reinforcement learning framework. In this framework, we first model two trainers skilled at different searching strategies: (1) KBest based trainer; (2) Decision Tree based trainer." +"---\nabstract: 'The $CP$ violation in the neutrino transition electromagnetic dipole moment is discussed in the context of the Standard Model with an arbitrary number of right-handed singlet neutrinos. A full one-loop calculation of the neutrino electromagnetic form factors is performed in the Feynman gauge. A non-zero $CP$ asymmetry is generated by a required threshold condition for the neutrino masses along with non-vanishing $CP$ violating phases in the lepton flavour mixing matrix. We follow the paradigm of $CP$ violation in neutrino oscillations to parametrise the flavour mixing contribution into a series of Jarlskog-like parameters. This formalism is then applied to a minimal seesaw model with two heavy right-handed neutrinos denoted $N_1$ and $N_2$. We observe that the $CP$ asymmetries for decays into light neutrinos $N\to \nu\gamma$ are extremely suppressed, maximally around $10^{-17}$. However the $CP$ asymmetry for $N_2 \to N_1 \gamma$ can reach order unity. 

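The two "trainer" heuristics named in the feature-selection abstract above have standard scikit-learn counterparts. The sketch below ranks features with SelectKBest and with decision-tree importances on synthetic data; the dataset and the choice of k are illustrative, and this is not the paper's full interactive reinforcement learning loop.

```python
# Sketch of the two "trainer" heuristics using their standard scikit-learn
# counterparts on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

kbest = SelectKBest(f_classif, k=5).fit(X, y)       # univariate K-Best trainer
kbest_subset = np.flatnonzero(kbest.get_support())

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
tree_subset = np.argsort(tree.feature_importances_)[-5:]  # tree-based trainer

print("K-Best trainer suggests:", sorted(kbest_subset.tolist()))
print("Tree trainer suggests:  ", sorted(tree_subset.tolist()))
```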
Even if the Dirac $CP$ phase $\delta$ is the only source of $CP$ violation, a large $CP$ asymmetry around $10^{-5}$-$10^{-3}$ is comfortably achieved.'\n---\n\n[**$CP$ violation in neutral lepton transition dipole moment**]{}\n\n[**Shyam Balaji,$^1$**]{}[^1] \u00a0 [**Maura Ramirez-Quezada$^2$**]{}[^2] \u00a0 and [**Ye-Ling Zhou$^3$**]{}[^3]\\\n\\\n[$^2$ Institute for Particle Physics Phenomenology, Department of Physics,\\\nDurham University, Durham DH1 3LE," +"---\nauthor:\n- \nbibliography:\n- 'references.bib'\ntitle: 'Puzzle-AE: Novelty Detection in Images through Solving Puzzles'\n---\n\nNovelty is defined as any digression from the essential features of any given phenomenon. The main task of novelty detection is to infer deviated features from extracted normal training samples\u2019 features. For instance, a model trained on healthy brain CT-scan images should be able to find non-healthy test input images by comparing currently extracted features and the expected ones using different metrics [@chalapathy2019deep; @chen2018unsupervised; @baur2018deep].\n\n![ Reconstruction of normal and anomalous inputs during the testing phase. As it is shown, the model is unable to solve the puzzle for anomalous inputs, which do not have the main features existing in the normal data. As a result, anomalous samples produce high reconstruction loss, whereas normal inputs have low reconstruction loss since their puzzles are perfectly solved by the model.[]{data-label="Abstract"}](photos/First_photo3.pdf){width="\linewidth"}\n\nAlthough the AUROC has been used as the primary metric for distinguishing binary classifiers\u2019 performances, this criterion is not sufficient alone. That is because it shows the average performance of a model at different operating points. However, a fixed operating point of the curve is needed in many" +"---\nabstract: 'A Kaufmann model is an $\omega_1$-like, recursively saturated, rather classless model of $\PA$ (or ${\ensuremath{\mathsf{ZF}}\xspace}$). Such models were constructed by Kaufmann under the combinatorial principle $\diamondsuit_{\omega_1}$ and Shelah showed they exist in ${\ensuremath{\mathsf{ZFC}}\xspace}$ by an absoluteness argument. Kaufmann models are an important witness to the incompactness of $\omega_1$ similar to Aronszajn trees. In this paper we look at some set theoretic issues related to this, motivated by the seemingly na\u00efve question of whether such a model can be \u201ckilled\" by forcing without collapsing $\omega_1$. We show that the answer to this question is independent of ${\ensuremath{\mathsf{ZFC}}\xspace}$ and closely related to similar questions about Aronszajn trees. As an application of these methods we also show that it is independent of ${\ensuremath{\mathsf{ZFC}}\xspace}$ whether or not Kaufmann models can be axiomatized in the logic $L_{\omega_1, \omega} (Q)$ where $Q$ is the quantifier \u201cthere exists uncountably many\".'\naddress: 'Institut f\u00fcr Mathematik, Kurt G\u00f6del Research Center, Universit\u00e4t Wien, Kolingasse 14-16, 1090 Wien, AUSTRIA'\nauthor:\n- Corey Bacal Switzer\nbibliography:\n- 'mopabib.bib'\ntitle: Destructibility and Axiomatizability of Kaufmann Models\n---\n\nIntroduction\n============\n\nA Kaufmann model is an $\omega_1$-like, recursively saturated, rather classless model (these terms are defined below). 

Kaufmann first constructed such models for" +"---\nabstract: |\n We obtain a first moment formula for Rankin-Selberg convolution $L$-series of holomorphic modular forms or Maass forms of arbitrary level on $\operatorname{GL}(2)$, with an orthonormal basis of Maass forms. One consequence is the best result to date, uniform in level, spectral value and weight, for the equality of two Maass or holomorphic cusp forms if their Rankin-Selberg convolutions with the orthonormal basis of Maass forms $u_j$ are equal at the center of the critical strip for sufficiently many $u_j$.\n\n The main novelty of our approach is the new way the error terms are treated. They are brought into an exact form that provides optimal estimates for this first moment case, and also provides a basis for an extension to second moments, which will appear in another work.\naddress:\n- 'Mathematics Department, Brown University, Providence RI 02912, USA '\n- 'School of Mathematics, University of Bristol, Bristol, BS8 1TW, UK'\n- 'Mathematics Department, Northwestern University, Evanston, IL 60208, USA'\nauthor:\n- 'Jeff Hoffstein, Min Lee and Maria Nastasescu'\ntitle: 'First moments of Rankin-Selberg convolutions of automorphic forms on $\operatorname{GL}(2)$'\n---\n\nIntroduction {#s:intro}\n============\n\nThe objective of this paper is to obtain a first moment formula for Rankin-Selberg convolution" +"---\nabstract: 'The fifth edition of the \u201cComputing Applications in Particle Physics\u201d school was held on 3-7 February 2020, at \u0130stanbul University, Turkey. This particular edition focused on the processing of simulated data from the Large Hadron Collider collisions using an Analysis Description Language and its runtime interpreter called CutLang. 24 undergraduate and 6 graduate students were introduced to collider data analysis during the school. After 3 days of lectures and exercises, the students were grouped into teams of 3 or 4 and each team was assigned an analysis publication from the ATLAS or CMS experiments. After 1.5 days of independent study, each team was able to reproduce the assigned analysis using CutLang.'\nauthor:\n- |\n A. Ad\u0131g\u00fczel$^{1,4}$, O. \u00c7ak\u0131r$^2$, \u00dc. Kaya$^3$, V. E. \u00d6zcan$^{3,4}$,\\\n S. \u00d6zt\u00fcrk$^5$, S. Sekmen$^6$, \u0130. T\u00fcrk \u00c7ak\u0131r$^7$, G. \u00dcnel$^8$\ndate: |\n $^1$ \u0130stanbul University, Physics Dept., \u0130stanbul, Turkey\\\n $^2$ Ankara University, Physics Dept., Ankara, Turkey\\\n $^3$ Bo\u011fazi\u00e7i University, Physics Dept., \u0130stanbul, Turkey\\\n $^4$ Bo\u011fazi\u00e7i University, Feza G\u00fcrsey Center for Physics and Mathematics, \u0130stanbul, Turkey\\\n $^5$ Tokat Gaziosmanpa\u015fa University, Physics Dept., Tokat, Turkey\\\n $^6$ Kyungpook National University, Physics Dept., Daegu, South Korea\\\n $^7$ Giresun University, Dept. of Energy Systems Engineering, Giresun, Turkey\\\n $^8$ University of California, Irvine," +"---\nabstract: 'Predicting a customer\u2019s propensity-to-pay at an early point in the revenue cycle can provide organisations with many opportunities to improve the customer experience, reduce hardship and reduce the risk of impaired cash flow and occurrence of bad debt. With the advancements in data science, machine learning techniques can be used to build models to accurately predict a customer\u2019s propensity-to-pay. Creating effective machine learning models without access to large and detailed datasets presents some significant challenges. 

This paper presents a case-study, conducted on a dataset from an energy organisation, to explore the uncertainty around the creation of machine learning models that are able to predict residential customers entering financial hardship, which then reduces their ability to pay energy bills. Incorrect predictions can result in inefficient resource allocation and vulnerable customers not being proactively identified. This study investigates machine learning models\u2019 ability to consider different contexts and estimate the uncertainty in the prediction. Seven models from four families of machine learning algorithms are investigated for their novel utilisation. A novel concept of utilising a Bayesian Neural Network for the binary classification problem of propensity-to-pay energy bills is proposed and explored for deployment.'\nauthor:\n- Md Abul Bashar^a^\n- 'Astin-Walmsley Kieren^ab^'\n-" +"---\nabstract: 'Jacobi\u2019s method is a well-known algorithm in linear algebra to diagonalize symmetric matrices by successive elementary rotations. We report here on the generalization of these elementary rotations towards canonical transformations acting in Hamiltonian phase spaces. This generalization allows one to use Jacobi\u2019s method in order to compute eigenvalues and eigenvectors of Hamiltonian (and skew-Hamiltonian) matrices with either purely real or purely imaginary eigenvalues by successive elementary \u201cdecoupling\u201d transformations.'\nauthor:\n- 'C. Baumgarten'\nbibliography:\n- 'jacobi\_paper.bib'\ntitle: 'A Jacobi Algorithm in Phase Space: Diagonalizing (skew-) Hamiltonian and Symplectic Matrices with Dirac-Majorana Matrices'\n---\n\n> The real importance of Einstein\u2019s work was that he introduced Lorentz transformations as something fundamental in physics \u2013 P.A.M. Dirac\u00a0[@DiracLT]\n\nIntroduction\n============\n\nThe problem of eigenvector and eigenvalue computation of (skew-) symmetric, Hamiltonian and symplectic matrices received considerable attention in the past [^1]. Here we describe a method that is entirely based on pure Hamiltonian (symplectic) notions. We shall develop the real Clifford algebra $Cl(3,1)$ from algebraic Hamiltonian symmetries and prove a morphism between the group of linear symplectic transformations in classical phase space and the Lorentz group.\n\nJacobi\u2019s Method is a well-known numerical method that allows one to diagonalize symmetric matrices[^2]." +"---\nabstract: 'Machine learning (ML) offers a collection of powerful approaches for detecting and modeling associations, often applied to data having a large number of features and/or complex associations. Currently, there are many tools to facilitate implementing custom ML analyses (e.g. scikit-learn). Interest is also increasing in automated ML packages, which can make it easier for non-experts to apply ML and have the potential to improve model performance. ML permeates most subfields of biomedical research with varying levels of rigor and correct usage. Tremendous opportunities offered by ML are frequently offset by the challenge of assembling comprehensive analysis pipelines, and the ease of ML misuse. In this work we have laid out and assembled a complete, rigorous ML analysis pipeline focused on binary classification (i.e. case/control prediction), and applied this pipeline to both simulated and real world data. 

At a high level, this \u2019automated\u2019 but customizable pipeline includes a) exploratory analysis, b) data cleaning and transformation, c) feature selection, d) model training with 9 established ML algorithms, each with hyperparameter optimization, and e) thorough evaluation, including appropriate metrics, statistical analyses, and novel visualizations. This pipeline organizes the many subtle complexities of ML pipeline assembly to illustrate best practices to avoid" +"---\nabstract: |\n The semantics and the recursive execution model of Prolog make it very natural to express language interpreters in form of AST (Abstract Syntax Tree) interpreters where the execution follows the tree representation of a program. An alternative implementation technique is that of bytecode interpreters. These interpreters transform the program into a compact and linear representation before evaluating it and are generally considered to be faster and to make better use of resources.\n\n In this paper, we discuss different ways to express the control flow of interpreters in Prolog and present several implementations of AST and bytecode interpreters. On a simple language designed for this purpose, we evaluate whether techniques best known from imperative languages are applicable in Prolog and how well they perform. Our ultimate goal is to assess which interpreter design in Prolog is the most efficient as we intend to apply these results to a more complex language. However, we believe the analysis in this paper to be of more general interest.\nauthor:\n- 'Philipp K\u00f6rner $^\\textrm{\\Letter}$ [ ]{}, David Schneider and Michael Leuschel [ ]{}'\nbibliography:\n- 'references.bib'\ntitle: |\n On the Performance of\\\n Bytecode Interpreters in Prolog\n---\n\nIntroduction\n============\n\nWriting simple language" +"---\nabstract: 'This paper studies several aspects of symbolic ([*i.e.*]{}\u00a0subshift) factors of $\\mathcal{S}$-adic subshifts of finite alphabet rank. First, we address a problem raised in [@DDPM20] about the topological rank of symbolic factors of $\\mathcal{S}$-adic subshifts and prove that this rank is at most the one of the extension system, improving results from [@E20] and [@GH2020]. As a consequence of our methods, we prove that finite topological rank systems are coalescent. Second, we investigate the structure of fibers $\\pi^{-1}(y)$ of factor maps $\\pi\\colon(X,T)\\to(Y,S)$ between minimal $\\cS$-adic subshifts of finite alphabet rank and show that they have the same finite cardinality for all $y$ in a residual subset of $Y$. Finally, we prove that the number of symbolic factors (up to conjugacy) of a fixed subshift of finite topological rank is finite, thus extending Durand\u2019s similar theorem on linearly recurrent subshifts [@durand_2000].'\naddress:\n- 'Departamento de Ingenier\u00eda Matem\u00e1tica and Centro de Modelamiento Matem\u00e1tico, Universidad de Chile, Beauchef 851, Santiago, Chile.'\n- 'Laboratoire Ami\u00e9nois de Math\u00e9matiques Fondamentales et Appliqu\u00e9es, CNRS-UMR 7352, Universit\u00e9 de Picardie Jules Verne, 33 rue Saint Leu, 80039 Amiens cedex 1, France.'\nauthor:\n- Basti\u00e1n Espinoza\nbibliography:\n- 'biblio.bib'\ntitle: 'Symbolic factors of $\\cS$-adic subshifts of finite alphabet" +"---\nabstract: 'This paper examines the challenging problem of learning representations of entities and relations in a complex multi-relational knowledge graph. 
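As a concrete illustration of steps (b) through (e) just listed, the following is a minimal sketch built with scikit-learn, which the excerpt itself names as a pipeline-building tool. The synthetic dataset, the single chosen learner, and the grid values are hypothetical placeholders, not the authors' configuration of nine algorithms.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score, roc_auc_score
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),                   # (b) data transformation
    ("select", SelectKBest(mutual_info_classif)),  # (c) feature selection
    ("clf", LogisticRegression(max_iter=1000)),    # (d) one stand-in learner
])
grid = {"select__k": [10, 20, 40], "clf__C": [0.1, 1.0, 10.0]}
search = GridSearchCV(pipe, grid, scoring="roc_auc",
                      cv=StratifiedKFold(5, shuffle=True, random_state=0))
search.fit(X_tr, y_tr)                             # (d) hyperparameter optimization
proba = search.predict_proba(X_te)[:, 1]
print(roc_auc_score(y_te, proba),                  # (e) appropriate metrics
      balanced_accuracy_score(y_te, proba > 0.5))
```

Keeping scaling and feature selection inside the `Pipeline` ensures they are refit within each cross-validation fold, which is exactly the kind of leakage-avoiding rigor the excerpt advocates.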
We propose **[HittER]{}**, a **Hi**erarchical **T**ransformer model **t**o jointly learn **E**ntity-relation composition and **R**elational contextualization based on a source entity\u2019s neighborhood. Our proposed model consists of two different Transformer blocks: the bottom block extracts features of each entity-relation pair in the local neighborhood of the source entity and the top block aggregates the relational information from outputs of the bottom block. We further design a masked entity prediction task to balance information from the relational context and the source entity itself. Experimental results show that [HittER]{} achieves new state-of-the-art results on multiple link prediction datasets. We additionally propose a simple approach to integrate [HittER]{} into BERT and demonstrate its effectiveness on two Freebase factoid question answering datasets.'\nauthor:\n- |\n Sanxing Chen[[^1]]{}\\\n University of Virginia\\\n [sc3hn@virginia.edu]{}\\\n Xiaodong Liu, Jianfeng Gao\\\n Microsoft Research\\\n [{xiaodl,jfgao}@microsoft.com]{} Jian Jiao, Ruofei Zhang\\\n Microsoft Bing Ads\\\n [{jiajia,bzhang}@microsoft.com]{}\\\n Yangfeng Ji\\\n University of Virginia\\\n [yangfeng@virginia.edu]{}\nbibliography:\n- 'anthology.bib'\n- 'my.bib'\ntitle:\n- Knowledge Graph Embeddings from Hierarchical Transformers\n- '[HittER]{}: Hierarchical Transformers for Knowledge Graph Embeddings'\n---\n\n=1\n\nIntroduction\n============\n\nKnowledge graphs (KG) are a major form" +"---\nabstract: 'Image sharing on online social networks (OSNs) has become an indispensable part of daily social activities, but it has also increased the risk of privacy invasion. An online image can reveal various types of sensitive information, prompting the public to rethink individual privacy needs in OSN image sharing critically. However, the interaction of images and OSN makes the privacy issues significantly complicated. The current real-world solutions for privacy management fail to provide adequate personalized, accurate and flexible privacy protection. Constructing a more intelligent environment for privacy-friendly OSN image sharing is urgent in the near future. Meanwhile, given the dynamics in both users\u2019 privacy needs and OSN context, a comprehensive understanding of OSN image privacy throughout the entire sharing process is preferable to any views from a single side, dimension or level. To fill this gap, we contribute a survey of \u201cprivacy intelligence\u201d that targets modern privacy issues in dynamic OSN image sharing from a user-centric perspective. Specifically, we present the important properties and a taxonomy of OSN image privacy, along with a high-level privacy analysis framework based on the lifecycle of OSN image sharing. The framework consists of three stages with different principles of privacy by design. At" +"---\nauthor:\n- 'Benjamin Lee, Xiaoyun Hu, Maxime Cordeil, Arnaud Prouzeau, Bernhard Jenny and Tim Dwyer'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Shared Surfaces and Spaces: Collaborative Data Visualisation in a Co-located Immersive Environment'\n---\n\nRapid advances in virtual and augmented reality (VRAR) technologies offer exciting possibilities for data visualisation. In the future, the places where we work together\u2014especially for exploring and understanding data\u2014will likely differ from our current desktops and meeting rooms. 
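The two-block design of [HittER]{} described above can be caricatured in a few lines: a bottom step fuses each neighboring (entity, relation) pair of the source entity, and a top step attends over the fused features with a query vector. The NumPy sketch below is a toy stand-in for the actual Transformer blocks; the dimensions, the `tanh` composition, and the single-query attention are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, neighbors = 16, 5                      # toy embedding width and context size

def attention(q, K, V):
    """Scaled dot-product attention for a single query vector."""
    scores = K @ q / np.sqrt(q.size)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

# Bottom-block stand-in: compose each neighbor entity with its relation.
ent = rng.normal(size=(neighbors, d))
rel = rng.normal(size=(neighbors, d))
pair_feats = np.tanh(ent + rel)           # toy entity-relation composition

# Top-block stand-in: a [CLS]-like query aggregates the relational context.
cls = rng.normal(size=d)
context = attention(cls, pair_feats, pair_feats)
print(context.shape)                      # (16,): aggregated neighborhood feature
```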
It is, however, difficult to predict precisely what these places for collaborative immersive analytics will look like.\n\nIn lab environments, researchers have invested significant efforts in building collaborative visualisation environments that offer unconventional arrangements of displays, such as wall-sized displays, interactive tabletops, monitors on adjustable stands, and displays projected onto surfaces. More recently, there have been efforts to use AR to extend these displays. An example for individual use is DesignAR, which demonstrates the use of a Microsoft HoloLens with an interactive surface for the creation of 3D models [@Reipschlager:2019:DI3]. Specifically for collaborative visualisation, Augmented Reality above the Tabletop (ART) uses immersive parallel coordinates visualisations floating above an interactive tabletop which acts as the interaction surface [@Butscher:2018:CTO]. The DataSpace environment [@Cavallo:2019:DRH] and its deployed Immersive Insights system [@Cavallo:2019:IIH] combines" +"---\nabstract: 'Question classification (QC) is a prime constituent of automated question answering system. The work presented here demonstrates that the combination of multiple models achieve better classification performance than those obtained with existing individual models for the question classification task in Bengali. We have exploited state-of-the-art multiple model combination techniques, i.e., ensemble, stacking and voting, to increase QC accuracy. Lexical, syntactic and semantic features of Bengali questions are used for four well-known classifiers, namely Na\u00efve Bayes, kernel Na\u00efve Bayes, Rule Induction, and Decision Tree, which serve as our base learners. Single-layer question-class taxonomy with 8 coarse-grained classes is extended to two-layer taxonomy by adding 69 fine-grained classes. We carried out the experiments both on single-layer and two-layer taxonomies. Experimental results confirmed that classifier combination approaches outperform single classifier classification approaches by 4.02% for coarse-grained question classes. Overall, the stacking approach produces the best results for fine-grained classification and achieves 87.79% of accuracy. The approach presented here could be used in other Indo-Aryan or Indic languages to develop a question answering system.'\nauthor:\n- |\n Somnath Banerjee\\\n CSE Department\\\n Jadavpur University, India\\\n Sudip Kumar Naskar\\\n CSE Department\\\n Jadavpur University, India\\\n Paolo Rosso\\\n PRHLT Research Center\\\n Universitat Polit\u00e8cnica de Val\u00e8ncia, Spain\\" +"---\nabstract: 'Emotions play a critical role in our everyday lives by altering how we perceive, process and respond to our environment. Affective computing aims to instill in computers the ability to detect and act on the emotions of human actors. A core aspect of any affective computing system is the classification of a user\u2019s emotion. In this study we present a novel methodology for classifying emotion in a conversation. At the backbone of our proposed methodology is a pre-trained Language Model (LM), which is supplemented by a Graph Convolutional Network (GCN) that propagates information over the predicate-argument structure identified in an utterance. We apply our proposed methodology on the IEMOCAP and Friends data sets, achieving state-of-the-art performance on the former and a higher accuracy on certain emotional labels on the latter. 
Furthermore, we examine the role context plays in our methodology by altering how much of the preceding conversation the model has access to when making a classification.'\nauthor:\n- 'Connor T. Heaton'\n- 'David M. Schwartz'\nbibliography:\n- 'sample-base.bib'\ntitle: |\n Language Models as Emotional Classifiers\\\n for Textual Conversation\n---\n\nIntroduction\n============\n\nEmotions play a critical role in our everyday lives. They can alter how we perceive, process" +"---\nabstract: 'We carry out a theoretical investigation of overpressurized superfluid phases of $^4$He by means of quantum Monte Carlo (QMC) simulations. As a function of density, we study structural and superfluid properties, and estimate the energy of the roton excitation by inverting imaginary-time density correlation functions computed by QMC, using Maximum Entropy. We estimate the pressure at which the roton energy vanishes to be about 100 bars, which we identify with the spinodal density, i.e., the upper limit for the existence of a metastable superfluid phase.'\nauthor:\n- Youssef Kora and Massimo Boninsegni\nbibliography:\n- 'biblio.bib'\ntitle: Roton excitation in overpressurized superfluid $^4$He\n---\n\nIntroduction\n============\n\nHelium is the only element in nature that does not crystallize at zero temperature under the pressure of its own vapor; instead, its thermodynamic equilibrium phase is a liquid capable of flowing without dissipation ([*superfluid*]{}). A pressure of around 25 bars must be applied in order to stabilize a hexagonal closed-packed crystalline phase. There is now consensus that, upon crystallizing, the system loses its superfluid properties [@colloquium].\\\nIt is possible, however, to realize experimentally metastable liquid phases of helium at pressures higher than that of crystallization [@balibar; @werner]. This allows one to study" +"---\nabstract: 'We present a numerical study on non-Fermi liquid behaviour of a three dimensional system. The Hubbard model in a cubic lattice is simulated by the dynamical cluster approximation, in particular the quasi-particle weight is calculated at finite dopings for a range of temperatures. Near the putative quantum critical point, we find evidence of a separatrix at a finite doping which separates the Fermi liquid from non-Fermi liquid as the doping increases. Our results suggest that a marginal Fermi liquid and possibly a quantum critical point should exist in the three dimensions interacting Fermi system.'\nauthor:\n- Samuel Kellar\n- 'Ka-Ming Tam'\nbibliography:\n- 'apssamp.bib'\ntitle: 'Non-Fermi Liquid Behaviour in the Three Dimensional Hubbard Model'\n---\n\n\\[sec:level1\\]Introduction\n==========================\n\nThe theory of the Fermi liquid is an important milestone of condensed matter physics. [@Landau1956; @Landau1957; @Landau1959] It encapsulates almost all metallic interacting fermionic systems. The fundamental assumption is that the interacting system can be obtained by adiabatically turning on the interaction. All the quantum numbers of the non-interacting system remain intact, specifically the momentum remains a good quantum number for characterizing excitations.\n\nThere are notable exceptions to the Fermi liquid which have been discovered over time. The most prominent is" +"---\nabstract: 'In this short note we study how well a Gaussian distribution can be approximated by distributions supported on $[-a,a]$. 
Perhaps the natural conjecture is that for large $a$ the almost optimal choice is given by truncating the Gaussian to $[-a,a]$. Indeed, such an approximation achieves the optimal rate of $e^{-\\Theta(a^2)}$ in terms of the $L_\\infty$-distance between characteristic functions. However, if we consider the $L_\\infty$-distance between Laplace transforms on a complex disk, the optimal rate is $e^{-\\Theta(a^2 \\log a)}$, while truncation still only attains $e^{-\\Theta(a^2)}$. The optimal rate can be attained by the Gauss-Hermite quadrature. As a corollary, we also construct a \u201csuper-flat\u201d Gaussian mixture of $\\Theta(a^2)$ components with means in $[-a,a]$ and whose density has all derivatives bounded by $e^{-\\Omega(a^2 \\log(a))}$ in the $O(1)$-neighborhood of the origin.'\nauthor:\n- 'Yury Polyanskiy and Yihong Wu[^1]'\ntitle: Note on approximating the Laplace transform of a Gaussian on a complex disk\n---\n\nApproximating the Gaussian\n==========================\n\nWe study the best approximation of a Gaussian distribution by compactly supported measures, in the sense of the uniform approximation of the Laplace transform on a complex disk. Let $L_\\pi(z) = \\int_{\\ensuremath{\\mathbb{R}}}d\\pi(y) e^{zy}$ be the Laplace transform, $z \\in \\mathbb{C}$, of the measure $\\pi$ and $\\Psi_\\pi(t) \\eqdef" +"---\nabstract: 'The structural, magnetic and dielectric properties have been investigated in 3$d$-5$d$ based double perovskite Sr$_2$FeIrO$_6$ thin films deposited by the pulsed laser deposition technique. To understand the effect of strain, epitaxial films are grown with varying thickness as well as on different substrates, i.e., SrTiO$_3$ (100) and LaAlO$_3$ (100). The films with the highest thickness are found to be more relaxed. Atomic force microscope images indicate all films are of good quality, where grain sizes increase with increasing film thickness. X-ray absorption spectroscopy measurements indicate an Ir$^{5+}$ charge state in the present films while providing a detailed picture of hybridization between Fe/Ir-$d$ and O-$p$ orbitals. The bulk antiferromagnetic transition is retained in the films, though the transition temperature shifts to a higher temperature. Both the dielectric constant ($\\epsilon_r$) and loss ($\\tan\\delta$) show a change around the magnetic ordering temperatures of bulk Sr$_2$FeIrO$_6$, indicating a close relation between dielectric and magnetic behaviors. A Maxwell-Wagner type relaxation is found to hold over the whole frequency range down to low temperature in the present film. On changing the substrate, i.e., LaAlO$_3$ (100), the $\\epsilon_r(T)$ and ($\\tan\\delta(T)$) show almost similar behavior but $\\epsilon_r$ shows a higher value, which is due to an increased strain coming from the high mismatch of lattice" +"---\nabstract: 'The smartphone is the most successful consumer electronic product in today\u2019s mobile social network era. The smartphone camera quality and its image post-processing capability is the dominant factor that impacts a consumer\u2019s buying decision. However, the quality evaluation of photos taken with smartphones remains a labor-intensive task and relies on professional photographers and experts. As an extension of the prior CNN-based NR-IQA approach, we propose a multi-task deep CNN model with scene type detection as an auxiliary task. With the shared model parameters in the convolution layer, the learned feature maps could become more scene-relevant and enhance the performance. 
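The Gauss-Hermite claim above is easy to check numerically. The sketch below assumes NumPy, whose `hermegauss` returns nodes and weights for the weight function $e^{-x^2/2}$; it compares the Laplace transform of the standard Gaussian, $L(z)=e^{z^2/2}$, with that of the normalized quadrature measure (a discrete measure supported on roughly $[-2\sqrt{n}, 2\sqrt{n}]$) at a complex point.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # weight exp(-x**2/2)

def laplace_gauss(z):
    return np.exp(z**2 / 2)            # E[exp(zY)] for Y ~ N(0, 1)

def laplace_quad(z, n):
    x, w = hermegauss(n)               # n nodes, spanning roughly [-2*sqrt(n), 2*sqrt(n)]
    return np.sum((w / w.sum()) * np.exp(z * x))   # normalized to a probability measure

z = 1.0 + 1.0j                         # a point on a complex disk
for n in (5, 10, 20):
    print(n, abs(laplace_quad(z, n) - laplace_gauss(z)))   # error shrinks rapidly in n
```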
The evaluation result shows improved SROCC performance compared to traditional NR-IQA methods and single task CNN-based models.'\naddress: |\n Department of Computer Science and Information Engineering,\\\n National Taiwan University, Taiwan\\\n E-mail: $^1$chenhsiu48@cmlab.csie.ntu.edu.tw, $^2$wjl@cmlab.csie.ntu.edu.tw\ntitle: 'Multi-task deep CNN model for no-reference image quality assessment on smartphone camera photos'\n---\n\n\u0141[[L]{}]{}\n\nImage quality assessment, No-reference IQA, Convolutional neural networks, Smartphone camera photo.\n\nIntroduction {#sec:intro}\n============\n\nImage quality assessment (IQA) methods are developed to automatically to predict image quality without human subjective judgment, which is known to be costly and time-consuming. It was evident that various image distortions such as blur, noise, and JPEG" +"---\nabstract: 'In the information age, a secure and stable network environment is essential and hence intrusion detection is critical for any networks. In this paper, we propose a *self-organizing map assisted deep autoencoding Gaussian mixture model (SOM-DAGMM)* supplemented with well-preserved input space topology for more accurate network intrusion detection. The deep autoencoding Gaussian mixture model comprises a compression network and an estimation network which is able to perform unsupervised joint training. However, the code generated by the autoencoder is inept at preserving the topology of the input space, which is rooted in the bottleneck of the adopted deep structure. A self-organizing map has been introduced to construct SOM-DAGMM for addressing this issue. The superiority of the proposed SOM-DAGMM is empirically demonstrated with extensive experiments conducted upon two datasets. Experimental results show that SOM-DAGMM outperforms state-of-the-art DAGMM on all tests, and achieves up to 15.58% improvement in F1 score and with better stability.'\nauthor:\n- 'Yang Chen, Nami Ashizawa, Seanglidet Yean, Chai Kiat Yeo, Naoto Yanai [^1] [^2] [^3]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'Bib0118.bib'\ntitle: 'Self-Organizing Map assisted Deep Autoencoding Gaussian Mixture Model for Intrusion Detection'\n---\n\nIntrusion Detection, Anomaly Detection, Self-Organizing Map, Input Space Topology, Deep Autoencoding Gaussian Mixture" +"---\nabstract: 'We suggest a new protocol for the information reconciliation stage of quantum key distribution based on polar codes. The suggested approach is based on the blind technique, which is proved to be useful for low-density parity-check (LDPC) codes. We show that the suggested protocol outperforms the blind reconciliation with LDPC codes, especially when there are high fluctuations in quantum bit error rate (QBER).'\nauthor:\n- 'E.O. Kiktenko'\n- 'A.O. Malyshev'\n- 'A.K. Fedorov'\ntitle: Blind information reconciliation with polar codes for quantum key distribution\n---\n\nIntroduction\n============\n\nQuantum key distribution (QKD) allows growing a secret key between two legitimate users connected by a quantum and authenticated classical channels\u00a0[@Gisin2002; @Scarani2009; @Lo2014; @Lo2016]. The security of QKD is based on the laws of quantum physics and it is guaranteed to be secure against any unforeseen technological developments, such as quantum computing\u00a0[@Shor1997].\n\nA workflow of QKD devices can be divided into two phases\u00a0[@Gisin2002; @Scarani2009]. 
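The topology-preserving ingredient that SOM-DAGMM adds is the classical self-organizing map update. A minimal NumPy sketch of that component alone is given below; the grid size, decay schedules, and random stand-in data are illustrative assumptions, not the paper's settings. The learned grid location of the best-matching unit is the kind of topology-aware signal the full model uses to supplement the autoencoder code.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 4))      # stand-in for compressed input features
grid = rng.normal(size=(8, 8, 4))      # 8x8 map of prototype vectors
coords = np.stack(np.meshgrid(np.arange(8), np.arange(8), indexing="ij"), -1)

for t, v in enumerate(data):
    lr = 0.5 * np.exp(-t / 500)        # learning-rate decay
    sigma = 3.0 * np.exp(-t / 500) + 1e-3   # shrinking neighborhood radius
    bmu = np.unravel_index(np.argmin(((grid - v) ** 2).sum(-1)), (8, 8))
    d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
    h = np.exp(-d2 / (2 * sigma**2))   # topological neighborhood kernel
    grid += lr * h[..., None] * (v - grid)   # pull BMU and its neighbors toward v

print(bmu)  # grid coordinates of the last sample's best-matching unit
```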
During the first phase, QKD devices encode information in quantum bits (qubits), transmit and measure them, and then discard the records about preparation and measurement events occurred in incompatible bases. As a result of this phase, two legitimate parties, Alice and Bob, obtain so-called" +"---\nabstract: 'Mixed-initiative systems allow users to interactively provide feedback to potentially improve system performance. Human feedback can correct model errors and update model parameters to dynamically adapt to changing data. Additionally, many users desire the ability to have a greater level of control and fix perceived flaws in systems they rely on. However, how the ability to provide feedback to autonomous systems influences user trust is a largely unexplored area of research. Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy. We present a controlled experiment using a simulated object detection system with image data to study the effects of interactive feedback collection on user impressions. The results show that providing human-in-the-loop feedback lowered both participants\u2019 trust in the system and their perception of system accuracy, regardless of whether the system accuracy improved in response to their feedback. These results highlight the importance of considering the effects of allowing end-user feedback on user trust when designing intelligent systems.'\nauthor:\n- |\n Donald R. Honeycutt, Mahsan Nourani, Eric D. Ragan\\\n University of Florida, Gainesville, Florida\\\n dhoneycutt@ufl.edu, mahsannourani@ufl.edu, eragan@ufl.edu\\\nbibliography:\n- 'bibliography.bib'\ntitle: 'Soliciting Human-in-the-Loop User Feedback for Interactive Machine" +"---\nabstract: 'For each $n \\ge 1$, let ${\\mathrm{d}}^n=(d^{n}(i),1 \\le i \\le n)$ be a sequence of positive integers with even sum $\\sum_{i=1}^n d^n(i) \\ge 2n$. Let $(G_n,T_n,\\Gamma_n)$ be uniformly distributed over the set of simple graphs $G_n$ with degree sequence ${\\mathrm{d}}^n$, endowed with a spanning tree $T_n$ and rooted along an oriented edge $\\Gamma_n$ of $G_n$ which is not an edge of $T_n$. Under a finite variance assumption on degrees in $G_n$, we show that, after rescaling, $T_n$ converges in distribution to the Brownian continuum random tree as $n \\to \\infty$. Our main tool is a new version of Pitman\u2019s additive coalescent [@MR1673928], which can be used to build both random trees with a fixed degree sequence, and random tree-weighted graphs with a fixed degree sequence. As an input to the proof, we also derive a Poisson approximation theorem for the number of loops and multiple edges in the superposition of a fixed graph and a random graph with a given degree sequence sampled according to the configuration model; we find this to be of independent interest.'\naddress: 'Department of Mathematics and Statistics, McGill University, Montr\u00e9al, Canada'\nauthor:\n- 'Louigi Addario-Berry'\n- Jordan Barrett\ndate: 'August 27, 2020; revised" +"---\nabstract: 'Soft electro-active (SEA) materials can be designed and manufactured with gradients in their material properties, to modify and potentially improve their mechanical response in service. Here, we investigate the nonlinear response of, and axisymmetric wave propagation in a soft circular tube made of a functionally graded SEA material and subject to several biasing fields, including axial pre-stretch, internal/external pressure, and through-thickness electric voltage. 
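The basis-sifting step of the first phase described above reduces to a one-line Boolean mask in simulation. A toy sketch follows, assuming NumPy; the basis and bit conventions are illustrative. On average half the positions survive, and these sifted bits are the raw material for the information reconciliation stage the paper targets.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20
bits  = rng.integers(0, 2, n)    # Alice's raw key bits
a_bas = rng.integers(0, 2, n)    # Alice's preparation bases (0: Z, 1: X)
b_bas = rng.integers(0, 2, n)    # Bob's measurement bases, chosen independently

keep = a_bas == b_bas            # only compatible-basis events survive sifting
sifted = bits[keep]              # ~n/2 bits remain on average
print(keep.sum(), sifted)
```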
We take the energy density function of the material to be of the Mooney-Rivlin ideal dielectric type, with material parameters changing linearly along the radial direction. We employ the general theory of nonlinear electro-elasticity to obtain explicitly the nonlinear response of the tube to the applied fields. To study wave propagation under inhomogeneous biasing fields, we formulate the incremental equations of motion within the state-space formalism. We adopt the approximate laminate technique to derive the analytical dispersion relations for the small-amplitude torsional and longitudinal waves superimposed on a finitely deformed state. Comprehensive numerical results then illustrate that the material gradients and biasing fields have significant influences on the static nonlinear response and on the axisymmetric wave propagation in the tube. This study lays the groundwork for designing SEA actuators with improved performance, for tailoring tunable" +"---\nbibliography:\n- 'refs.bib'\ndate: June 2020\nnocite: '[@clrs; @cz; @nm; @pt1; @pt2]'\ntitle: |\n Sorting an Array Using the Topological Sort\\\n of a Corresponding Comparison Graph\n---\n\nIntroduction\n============\n\nMost sorting algorithms run a multitude of array comparisons, and from those results we decide how to manipulate the elements to eventually achieve a sorted ordering of elements. This process can be implemented and infused with all kinds of data structures. In particular, directed graphs are a great way to structure this problem, and this allows us to look at sorting in different light and realize new methods of sorting.\n\nWe can represent every element as a vertex and the result of every comparison as an arc. Thus we can construct a graph that essentially stores all the comparisons made. In fact, we can mathematically represent these comparisons as an order relation: construct an arc if and only if the origin is less than the terminus (in the case of distinct array elements). Now we must decipher such a graph, i.e. to find some meaning to all those arcs that we have created. Our end goal is to achieve a sort of our input array, and in this paper we" +"Formalizing our Approach {#sec:formalization}\n========================\n\nWe opt for a relational algebra representation to be able to make formal statements (e.g. proofs) about the different operators. Before fully formalizing our approach, we first introduce a map operator ($\\chi$) as an addition to the standard selection, projection, and join operators in traditional relational algebra. This operator is used for materializing values as described in [@BMG93]. We also introduce our new interval-timestamp join ($\\JoinByS$), allowing us to replace costly non-equi-join predicates with an operator that, as we show later, can be implemented much more efficiently.\n\nThe map operator $\\Map{a}{e}(\\Rel r)$ evaluates the expression $e$ on each tuple of $\\Rel r$ and concatenates the result to the tuple as attribute $a$: $$\\Map{a}{e}(\\Rel r) = \\SetBuilder{\n r \\circ [a:e(r)]\n }{\n r \\in \\Rel r\n }.$$ If the attribute $a$ already exists in a tuple, we instead overwrite its value.\n\nThe interval-timestamp join $\\Rel r \\JoinByS \\Rel s$ matches the intervals in the tuples of relation $\\Rel r$ with the timestamps of the tuples in $\\Rel\ns$. It comes in two flavors, depending on the timestamp chosen for $s$, i.e., $T_s$ or $T_e$. 
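Before the formal definition of the interval-starting-timestamp flavor given next, the intended semantics of both operators can be pinned down with a toy evaluation over relations represented as lists of dicts. This nested-loop sketch fixes meaning only; the paper's point is precisely that the join admits a far more efficient implementation than this. The attribute names (`Ts`, `Te`, `T`) and the default predicate are stand-ins for the formal notation.

```python
def map_op(rel, attr, expr):
    """chi_{attr:expr}(rel): materialize expr(r) as a new attribute of each tuple."""
    return [{**r, attr: expr(r)} for r in rel]

def join_by_ts(r_rel, s_rel, theta=lambda r, s: True):
    """Match the interval [Ts, Te) of each r-tuple with the chosen
    timestamp T of each s-tuple (its T_s in the 'starting' flavor)."""
    return [{**r, **s} for r in r_rel for s in s_rel
            if r["Ts"] <= s["T"] < r["Te"] and theta(r, s)]

r = [{"rid": 1, "Ts": 0, "Te": 10}, {"rid": 2, "Ts": 10, "Te": 20}]
s = [{"v": "a", "T": 5}, {"v": "b", "T": 12}]
# materialize each interval's duration, then join on interval containment
print(join_by_ts(map_op(r, "dur", lambda t: t["Te"] - t["Ts"]), s))
```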
So, the interval-starting-timestamp join is defined as $$\\Rel r \\JoinByTs^\\theta \\Rel s =" +"---\nabstract: 'The voice conversion challenge is a bi-annual scientific event held to compare and understand different voice conversion (VC) systems built on a common dataset. In 2020, we organized the third edition of the challenge and constructed and distributed a new database for two tasks, intra-lingual semi-parallel and cross-lingual VC. After a two-month challenge period, we received 33 submissions, including 3 baselines built on the database. From the results of crowd-sourced listening tests, we observed that VC methods have progressed rapidly thanks to advanced deep learning methods. In particular, speaker similarity scores of several systems turned out to be as high as target speakers in the intra-lingual semi-parallel VC task. However, we confirmed that none of them have achieved human-level naturalness yet for the same task. The cross-lingual conversion task is, as expected, a more difficult task, and the overall naturalness and similarity scores were lower than those for the intra-lingual conversion task. However, we observed encouraging results, and the MOS scores of the best systems were higher than 4.0. We also show a few additional analysis results to aid in understanding cross-lingual VC better.'\naddress: |\n $^1$National Institute of Informatics, Japan $^2$Nagoya University, Japan\\\n $^3$National University of Singapore," +"---\nabstract: 'In this paper we obtain the necessary condition for the existence of perfect $k$-colorings (equitable $k$-partitions) in Hamming graphs $H(n,q)$, where $q=2,3,4$ and Doob graphs $D(m,n)$. As an application, we prove the non-existence of extended perfect codes in $H(n,q)$, where $q=3,4$, $n>q+2$, and in $D(m,n)$, where $2m+n>6$.'\nauthor:\n- 'Evgeny Bespalov [^1]'\nbibliography:\n- 'k.bib'\ntitle:\n- Extended perfect codes\n- 'On the non-existence of extended perfect codes and some perfect colorings[^2] '\n---\n\nIntroduction\n============\n\nA $k$-coloring of a graph $G=(V,E)$ is a surjective function from the vertex set $V$ into a color set of cardinality $k$, usually denoted by $\\{0,1,\\ldots,k-1\\}$. This coloring is called perfect if for any $i,j$ the number of vertices of color $j$ in the neighbourhood of vertex $x$ of color $i$ depends only on $i$ and $j$, but not on the choice of $x$. An equivalent concept is an equitable $k$-partition, which is a partition of the vertex set $V$ into cells $V_0,\\ldots,V_{k-1}$, where these cells are the preimages of the colors of some perfect $k$-coloring. Also the perfect colorings are the particular cases of the perfect structures, see e.g. [@Tar:perfstruct]. In this paper, we consider perfect colorings in Hamming graphs $H(n,q)$" +"---\nabstract: 'Reference tracking systems involve a plant that is stabilized by a local feedback controller and a command center that indicates the reference set-point the plant should follow. Typically, these systems are subject to limitations such as disturbances, systems delays, constraints, uncertainties, underperforming controllers, and unmodeled parameters that do not allow them to achieve the desired performance. In situations where it is not possible to redesign the inner-loop system, it is usual to incorporate an outer-loop control that instructs the system to follow a modified reference path such that the resultant path is close to the ideal one. 
Typically, strategies to design the outer-loop control need to know a model of the system, which can be an unfeasible task. In this paper, we propose a framework based on deep reinforcement learning that can learn a policy to generate a modified reference that improves the system\u2019s performance in a non-invasive and model-free fashion. To illustrate the effectiveness of our approach, we present two challenging cases in engineering: a flight control with a pilot model that includes human reaction delays, and a mean-field control problem for a massive number of space-heating devices. The proposed strategy successfully designs a reference signal that" +"---\nabstract: 'Using systematic effective field theory, we explore the properties of antiferromagnetic films subjected to magnetic and staggered fields that are either mutually aligned or mutually orthogonal. We provide low-temperature series for the entropy density in either case up to two-loop order. Invoking staggered, uniform and sublattice magnetizations of the bipartite antiferromagnet, we investigate the subtle order-disorder phenomena in the spin arrangement, induced by temperature, magnetic and staggered fields \u2013 some of which are quite counterintuitive. In the figures we focus on the spin-$\\frac{1}{2}$ square-lattice antiferromagnet, but our results are valid for any other bipartite two-dimensional lattice.'\nauthor:\n- |\n Christoph P.\u00a0Hofmann$^a$\\\n \\\n \\\n \\\ntitle: Spin Order and Entropy in Antiferromagnetic Films Subjected to Magnetic Fields\n---\n\nIntroduction {#Intro}\n============\n\nThe present work is part of an ongoing program the aim of which it is to systematically analyze the thermodynamic properties of antiferromagnetic systems using magnon effective field theory. While three-dimensional antiferromagnets have been discussed within this perspective in Refs.\u00a0[@HL90; @Leu94a; @Hof99a; @Hof99b] \u2013 and more recently in Refs.\u00a0[@Hof17b; @BH17; @BH19; @Hof20c] \u2013 here we continue exploring antiferromagnetic films.\n\nEarlier effective field theory based papers on antiferromagnetic films include Refs.\u00a0[@CHN89; @Fis89; @HL90; @HN93; @Hof10]." +"---\nabstract: 'The two-dimensional nature of engineered transition-metal ultra-thin oxide films offers a large playground of yet to be fully understood physics. Here, we study pristine SrVO$_3$ monolayers that have recently been predicted to display a variety of magnetic and orbital orders. Above all ordering temperatures, we find that the associated non-local fluctuations lead to a momentum differentiation in the self-energy, particularly in the scattering rate. In the one-band 2D Hubbard model, momentum-selectivity on the Fermi surface (\u201c$k=k_F$\u201d) is known to lead to pseudogap physics. Here instead, in the multi-orbital case, we evidence a differentiation between momenta on the occupied (\u201c$kk_F$\u201d) of the Fermi surface. 
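The outer-loop idea above can be made concrete with a toy plant whose inner loop has a steady-state tracking error: shifting the commanded reference removes the error without touching the inner controller. In the sketch below all numbers are illustrative, and a simple grid search over a constant reference shift stands in for the learned deep reinforcement learning policy.

```python
import numpy as np

def episode(delta, r=1.0, a=0.9, b=0.1, k=2.0, steps=200):
    """Inner loop: proportional controller tracking the *modified*
    reference r + delta on a toy first-order plant."""
    x, cost = 0.0, 0.0
    for _ in range(steps):
        u = k * ((r + delta) - x)   # the inner loop only sees r + delta
        x = a * x + b * u           # plant with a steady-state tracking error
        cost += (r - x) ** 2        # performance is judged against the ideal r
    return cost

# Stand-in for the learned outer-loop policy: pick the best constant shift.
deltas = np.linspace(0.0, 1.0, 101)
best = deltas[np.argmin([episode(d) for d in deltas])]
print(best, episode(0.0), episode(best))   # ~0.5 here; cost drops markedly
```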
Our work, based on the dynamical vertex approximation, complements the understanding of spectral signatures of non-local fluctuations, calls to (re)examine other ultra-thin oxide films and interfaces with methods beyond dynamical mean-field theory, and may point to correlation-enhanced thermoelectric effects.'\nauthor:\n- Matthias Pickem\n- 'Jan M.\u00a0Tomczak'\n- Karsten Held\ntitle: |\n Particle-hole asymmetric lifetimes promoted by spin and orbital fluctuations\\\n in SrVO$_3$ monolayers \n---\n\nIntroduction\n============\n\nIn the vicinity of phase transitions and in low-dimensional systems, non-local long-range fluctuations are known to proliferate. These are not only crucial for" +"---\nabstract: 'We introduce microscopic and macroscopic stochastic traffic models including traffic accidents. The microscopic model is based on a Follow-the-Leader approach whereas the macroscopic model is described by a scalar conservation law with space dependent flux function. Accidents are introduced as interruptions of a deterministic evolution and are directly linked to the traffic situation. Based on a Lax-Friedrichs discretization convergence of the microscopic model to the macroscopic model is shown. Numerical simulations are presented to compare the above models and show their convergence behaviour.'\nauthor:\n- 'Simone G\u00f6ttlich, Thomas Schillinger'\nbibliography:\n- 'literature.bib'\ndate: \ntitle: |\n Microscopic and Macroscopic Traffic Flow Models\\\n including Random Accidents\n---\n\n[**AMS Classification.**]{} 35L65, 90B20, 65M06 [**Keywords.**]{} microscopic and macroscopic traffic flow models, random accidents, numerical convergence analysis, discretization schemes.\n\nIntroduction\n============\n\nThroughout the world, traffic accidents are a serious problem and causes considerable societal costs. So there is a great interest in understanding how accidents may happen and how they may be reduced. Mathematical models can help at least to analyze traffic scenarios and probably allow for a reliable prediction. In particular, there exist a variety of different mathematical approaches to model traffic accidents, for instance ordinary differential equations [@accidents6], kinetic models [@kinMod]," +"---\nabstract: 'Quantum localization (single-body or many-body) comes with the emergence of local conserved quantities \u2014 whose conservation is precisely at the heart of the absence of transport through the system. In the case of fermionic systems and $S=1/2$ spin models, such conserved quantities take the form of effective two-level systems, called $l$-bits. While their existence is the defining feature of localized phases, their direct experimental observation remains elusive. Here we show that strongly localized $l$-bits bear a dramatic universal signature, accessible to state-of-the-art quantum simulators, in the form of periodic cusp singularities in the Loschmidt echo following a quantum quench from a N\u00e9el/charge-density-wave state. Such singularities are perfectly captured by a simple model of Rabi oscillations of an ensemble of independent two-level systems, which also reproduces the short-time behavior of the entanglement entropy and the imbalance dynamics. 
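The independent-two-level-system picture invoked just above has a closed form that already reproduces the periodic near-singular peaks. A small NumPy sketch follows; the single-site Hamiltonian $H=(\Delta\sigma_x+h\sigma_z)/2$ and all parameter values are illustrative choices, not the paper's model.

```python
import numpy as np

def site_echo(t, delta=1.0, h=0.2):
    """Return probability |<up| exp(-i H t) |up>|^2 for one l-bit with
    H = (delta*sx + h*sz)/2, i.e. a Rabi oscillation at frequency omega."""
    omega = np.hypot(delta, h)
    return 1 - (delta / omega) ** 2 * np.sin(omega * t / 2) ** 2

t = np.linspace(0, 25, 2001)
rate = -np.log(site_echo(t) ** 50) / 50   # intensive rate function, 50 independent l-bits
# sharp periodic peaks sit at odd multiples of pi/omega, the cusp precursors
print(t[np.argmax(rate)], np.pi / np.hypot(1.0, 0.2))
```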
In the case of interacting localized phases, the dynamics at longer times shows a sharp crossover to a faster decay of the Loschmidt echo singularities, offering an experimentally accessible signature of the interactions between $l$-bits.'\naddress:\n- '$^1$ Department of Physics, University of Warwick, Coventry, CV4 7AL, UK'\n- '$^2$ Univ de Lyon, Ens de Lyon, Univ Claude Bernard, and CNRS, Laboratoire" +"---\nabstract: |\n A-theorists and B-theorists debate whether the \u201cNow\u201d is metaphysically distinguished from other time slices. Analogously, one may ask whether the \u201cI\u201d is metaphysically distinguished from other perspectives. Few philosophers would answer the second question in the affirmative. An exception is Caspar Hare, who has devoted two papers and a book to arguing for such a positive answer. In this paper, I argue that those who answer the first question in the affirmative \u2013 A-theorists \u2013 should also answer the second question in the affirmative. This is because key arguments in favor of the A-theory are more effective as arguments in favor of the resulting combined position, and key arguments against the A-theory are ineffective against the combined position.\\\n [**Keywords:**]{} metaphysics, philosophy of time, philosophy of self.\nauthor:\n- 'Vincent Conitzer[^1]'\ntitle: 'The Personalized A-Theory of Time and Perspective[^2]'\n---\n\n=\n\nIntroduction\n============\n\nIn a series of unconventional but lucid works, Caspar Hare has laid out and defended a theory of [*egocentric presentism*]{} (or, in his more recent work, [*perspectival realism*]{}), in which a distinguished individual\u2019s experiences are [*present*]{} in a way that the experiences of others are not\u00a0[@Hare07:Self; @Hare09:On; @Hare10:Realism]. Closely related ideas appear in the" +"---\nabstract: 'Measurements of redshifted 21-cm emission of neutral hydrogen at $\\lesssim30$\u00a0MHz have the potential to probe the cosmic \u201cdark ages,\u201d a period of the universe\u2019s history that remains unobserved to date. Observations at these frequencies are exceptionally challenging because of bright Galactic foregrounds, ionospheric contamination, and terrestrial radio-frequency interference. Very few sky maps exist at $\\lesssim30$\u00a0MHz, and most have modest resolution. We introduce the Array of Long Baseline Antennas for Taking Radio Observations from the Sub-Antarctic (), a new experiment that aims to image low-frequency Galactic emission with an order-of-magnitude improvement in resolution over existing data. The \u00a0array will consist of antenna stations that operate autonomously, each recording baseband data that will be interferometrically combined offline. The array will be installed on Marion Island and will ultimately comprise 10 stations, with an operating frequency range of 1.2\u2013125\u00a0MHz and maximum baseline lengths of $\\sim20$\u00a0km. We present the \u00a0instrument design and discuss pathfinder observations that were taken from Marion Island during 2018\u20132019.'\naddress: |\n $^{1}$Department of Physics, McGill University, Montr\u00e9al, Quebec H3A 2T8, Canada\\\n $^{2}$School of Mathematics, Statistics, and Computer Science, University of KwaZulu\u2013Natal, Durban 4000, South Africa\\\n $^{3}$Department of Astronomy, University of California, Berkeley, California 94720," +"---\nabstract: 'We analyze recent approaches to quantum Markovianity and how they relate to the proper definition of quantum memory. 
We point out that the well-known criterion of information backflow may not correctly report the character of the memory, falsely signaling its quantumness. Therefore, as a complement to the well-known criteria, we propose several concepts of [*elementary dynamical maps*]{}. Maps of this type do not increase distinguishability of states which are indistinguishable by von Neumann measurements in a given basis. Those notions and convexity allow us to define general classes of processes without quantum memory in a weak and strong sense. Finally, we provide a practical characterization of the most intuitive class in terms of the new concept of [*witness of quantum information backflow*]{}.'\nauthor:\n- 'Micha\u0142 Banacki'\n- Marcin Marciniak\n- Karol Horodecki\n- 'Pawe\u0142 Horodecki'\ntitle: Information backflow may not indicate quantum memory\n---\n\n*Introduction.-* Nowadays, due to the constant development in both theoretical and experimental branches of quantum information theory, the topic of quantum memory has become more and more relevant. In particular, the idea of Markovian evolution (evolution without memory), coming from the theory of quantum open systems [@RH12; @BP07], has recently been studied extensively within different frameworks [@BLPV16;" +"---\nabstract: 'The vital statistics of the last century highlight a sharp increment of the average age of the world population with a consequent growth of the number of older people. Service robotics applications have the potential to provide systems and tools to support autonomous and self-sufficient older adults in their houses in everyday life, thereby avoiding the need to have third parties monitor them. In this context, we propose a cost-effective modular solution to detect and follow a person in an indoor, domestic environment. We exploited the latest advancements in deep learning optimization techniques, and we compared different neural network accelerators to provide a robust and flexible person-following system at the edge. Our proposed cost-effective and power-efficient solution is fully-integrable with pre-existing navigation stacks and creates the foundations for the development of fully-autonomous and self-contained service robotics\u00a0applications.'\nauthor:\n- 'Anna\u00a0Boschi, Francesco\u00a0Salvetti, Vittorio\u00a0Mazzia, and\u00a0Marcello\u00a0Chiaberge[^1]'\ntitle: 'A Cost-Effective Person-Following System for Assistive Unmanned Vehicles with Deep Learning at the Edge'\n---\n\n[Anna Boschi : A Cost-Effective Person-Following System for Assistive Unmanned Vehicles with Deep Learning at the Edge]{}\n\nperson-following; robotics; deep learning; edge AI\n\nIntroduction\n============\n\nPerson-following is a well-known problem in robotic autonomous
With this representation the time required for calculating the Hamiltonian matrix elements is substantially reduced. The binding energies of several systems consisting of helium and lithium atoms have been obtained using the DVR method.'\nauthor:\n- |\n Vladimir A.\u00a0Timoshenko\\\n Saint-Petersburg State University\\\n `vladimir.timoshenko7@gmail.com`\\\n Evgeny A.\u00a0Yarevsky\\\n Saint-Petersburg State University\\\ntitle: 'Discrete Variable Representation method in the study of few-body quantum systems with non-zero angular momentum'\n---\n\nIntroduction\n============\n\nSystems of particles with small binding energies and wave functions that are widely distributed in space are considered in this work. The study of quantum-mechanical systems" +"---\nabstract: 'Mechanical metamaterials feature engineered microstructures designed to exhibit exotic, and often counter-intuitive, effective behaviour such as negative Poisson\u2019s ratio or negative compressibility. Such a specific response is often achieved through instability-induced transformations of the underlying periodic microstructure into one or multiple patterning modes. Due to a strong kinematic coupling of individual repeating microstructural cells, non-local behaviour and size effects emerge, which cannot easily be captured by classical homogenization schemes. In addition, the individual patterning modes can mutually interact in space as well as in time, while at the engineering scale the entire structure can buckle globally. For efficient numerical predictions of macroscale engineering applications, a micromorphic computational homogenization scheme has recently been developed\u00a0(@Rokos:2019, *J. Mech. Phys. Solids*\u00a0[**123**]{}, 119\u2013137, 2019). Although this framework is in principle capable of accounting for spatial and temporal interactions between individual patterning modes, its implementation relied on a gradient-based quasi-Newton solution technique. This solver is suboptimal because\u00a0(i) it has sub-quadratic convergence, and\u00a0(ii) the absence of Hessians does not allow for proper bifurcation analyses. Given that mechanical metamaterials often rely on controlled instabilities, these limitations are serious. Addressing them will reduce the dependency of the solution on the initial guess by" +"---\nabstract: 'Optical quantum memory\u2014the ability to store photonic quantum states and retrieve them on demand\u2014is an essential resource for emerging quantum technologies and photonic quantum information protocols. Simultaneously achieving high efficiency and high-speed, broadband operation is an important task necessary for enabling these applications. We investigate the optimization of a large class of optical quantum memories based on resonant and near-resonant interaction with ensembles of $\\Lambda$-type level systems with the restriction that the temporal envelope of all optical fields must be Gaussian, which reduces experimental complexity. Through this optimization we demonstrate an experimentally simple path to saturation of the protocol-independent storage efficiency bound that is valid for a wide range of memory bandwidths, including those that are broadband and high-speed. Examining the resulting optimal Gaussian control field parameters, we find a continuous transformation between three physically distinct resonant quantum memory protocols. 
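A generic example of the basis/quadrature duality just described is the sinc-DVR of Colbert and Miller, in which the kinetic matrix has a closed form on a uniform grid while the potential matrix is simply diagonal at the grid points; that diagonality is exactly what makes the Hamiltonian matrix elements cheap to assemble. The NumPy sketch below tests it on the harmonic oscillator (a standard illustration with $\hbar=m=1$, not the authors' helium/lithium setup).

```python
import numpy as np

n, dx = 201, 0.1
x = dx * (np.arange(n) - n // 2)                      # uniform grid around 0
d = np.subtract.outer(np.arange(n), np.arange(n))     # index differences i - j
off = 2.0 * (-1.0) ** d / np.where(d == 0, 1, d) ** 2 # off-diagonal kinetic terms
T = np.where(d == 0, np.pi**2 / 3.0, off) / (2.0 * dx**2)  # Colbert-Miller form
H = T + np.diag(0.5 * x**2)                           # potential is diagonal in DVR
print(np.linalg.eigvalsh(H)[:4])                      # ~ [0.5, 1.5, 2.5, 3.5]
```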
We compare this Gaussian optimization scheme with standard shape-based optimization.'\naddress: |\n $^1$Department of Physics, University of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, IL 61801, USA\\\n $^2$Illinois Quantum Information Science and Technology (IQUIST) Center, University of Illinois at Urbana-Champaign, 1101 West Springfield Avenue, Urbana, IL 61801, USA\nauthor:\n- 'Kai Shinbrough$^{1,2}$'\n- 'Benjamin D. Hunt$^{1,2}$'\n-" +"---\nabstract: 'This paper derives the analytic and practicable expression of general solution of vacuum Regge-Wheeler equation via Homotopy Analysis Method.'\nauthor:\n- 'Gihyuk Cho$^{1}$[^1]'\nbibliography:\n- 'references.bib'\ntitle: |\n Analytic expression of perturbations of Schwarzschild spacetime\\\n via Homotopy Analysis Method\n---\n\nIntroduction\n============\n\nSince the blackhole perturbation theory was given its birth from the investigation on the stability of Schwarzschild metric by Regge $\\&$ Wheeler\u00a0[@Regge1957] and Vishveshwara\u00a0[@Vishveshwara1970], there have been much development (See [@Chandrasekhar1985] for a comprehensive review). The development for perturbed Schwarzchild metric could be summarized into the two pieces: (1) the 6 gauge-invariant description\u00a0[@Moncrief1974; @Gerlach1980; @Thompson2017] and (2) the (generalized) Darboux transformation\u00a0[@Chandrasekhar1975; @Glampedakis2017]. The first piece states that 10 components of the metric perturbation compose 6 independent variables which are *gauge invariant* up to linear order gauge transformation, and the resulting 6 coupled equations governing these 6 gauge-invariants, are reduced down to two master equations called Regge-Wheeler and Zerilli equations [@Regge1957; @Zerilli1970].(Regge-Wheeler and Zerilli functions are also gauge invariant.) Although, in the original papers of Regge $\\&$ Wheeler and Zerilli, they chose specific gauges (called Regge-Wheeler and Zerilli gauges) to derive the equations, one can now construct Regge-Wheeler and Zerilli equations in any" +"---\nabstract: 'Object counting, whose aim is to estimate the number of objects from a given image, is an important and challenging computation task. Significant efforts have been devoted to addressing this problem and achieved great progress, yet counting the number of ground objects from remote sensing images is barely studied. In this paper, we are interested in counting dense objects from remote sensing images. Compared with object counting in a natural scene, this task is challenging in the following factors: large scale variation, complex cluttered background, and orientation arbitrariness. More importantly, the scarcity of data severely limits the development of research in this field. To address these issues, we first construct a large-scale object counting dataset with remote sensing images, which contains four important geographic objects: buildings, crowded ships in harbors, large-vehicles and small-vehicles in parking lots. We then benchmark the dataset by designing a novel neural network that can generate a density map of an input image. The proposed network consists of three parts namely attention module, scale pyramid module and deformable convolution module to attack the aforementioned challenging factors. 
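On the ground-truth side, the density-map formulation just described amounts to placing a unit-mass Gaussian at every annotated object so that the map integrates to the object count. A short NumPy sketch follows; the kernel width and the point annotations are hypothetical.

```python
import numpy as np

def density_map(points, h, w, sigma=4.0):
    """Ground-truth density map: one normalized Gaussian per annotated object."""
    yy, xx = np.mgrid[0:h, 0:w]
    dmap = np.zeros((h, w))
    for (py, px) in points:
        g = np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * sigma**2))
        dmap += g / g.sum()          # each object contributes total mass 1
    return dmap

pts = [(20, 30), (22, 34), (70, 90)] # hypothetical object locations
dm = density_map(pts, 128, 128)
print(dm.sum())                      # ~3.0: the count is the integral of the map
```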
Extensive experiments are performed on the proposed dataset and one crowd counting dataset, which demonstrate the challenges of" +"---\nabstract: 'We consider simultaneous and continuous measurement of two noncommutative observables of the system whose commutator is not necessarily a $c$-number. We revisit the Arthurs-Kelly model and generalize it to describe the simultaneous measurement of two observables of the system. Using this generalized model, we continuously measure the system by following the scheme proposed by Scott and Milburn \\[Scott and Milburn, [Phys. Rev. A [**63**]{}, 042101 (2001)](https://doi.org/10.1103/PhysRevA.63.042101)\\]. We find that the unconditioned master equation reduces to the Lindblad form in the continuous limit. In addition, we find that the master equation does not contain a cross term of these two measurements. Finally, we propose a scheme to prepare the state of a two-level system in an external field by feedback control based on the simultaneous, continuous measurement of the two observables.'\nauthor:\n- Chao Jiang\n- Gentaro Watanabe\ntitle: Quantum dynamics under simultaneous and continuous measurement of noncommutative observables\n---\n\nIntroduction \\[sec:intro\\]\n==========================\n\nQuantum feedback control [@H.; @M.; @W.; @K.; @Jacobs2; @H.; @M.; @Wiseman; @Diosi94; @S.; @Lloyd; @Daniel; @Sagawa; @Brif] is a widely employed technique to drive a quantum system to a desired state [@Bushev; @Dotsenko; @Gillett; @Vijay]. Using measurement results to control system parameters, the feedback control technique" +"---\nabstract: 'In this work, we investigate the interactions between the charmed-strange mesons ($D_s, D_s^{*}$) in the $H$-doublet and the (anti-)charmed-strange mesons ($D_{s1}, D_{s2}^{*}$) in the $T$-doublet, where the one boson exchange model is adopted by considering the $S$-$D$ wave mixing and the coupled-channel effects. By extracting the effective potentials for the discussed $H_s\\bar{T}_s$ and $H_s{T}_s$ systems, we try to find the bound state solutions for the corresponding systems. We predict the possible hidden-charm hadronic molecular states with hidden strangeness, i.e., the $D_s^{*}\\bar D_{s1}+c.c.$ states with $J^{PC}$=$0^{--}, 0^{-+}$ and the $D_s^{*}\\bar D_{s2}^{*}+c.c.$ states with $J^{PC}$=$1^{--}, 1^{-+}$. Applying the same theoretical framework, we also discuss the $H_s T_s$ systems. Unfortunately, the existence of the open-charm and open-strange molecular states corresponding to the $H_s T_s$ systems can be excluded.'\nauthor:\n- 'Fu-Lai Wang$^{1,2}$'\n- 'Xiang Liu$^{1,2}$[^1]'\ntitle: 'Exotic double-charm molecular states with hidden or open strangeness and around $4.5\\sim 4.7$ GeV'\n---\n\nIntroduction {#sec1}\n============\n\nStudying exotic hadronic states, which are very different from conventional mesons and baryons, is an intriguing research frontier full of opportunities and challenges in hadron physics. As an important part of hadron spectroscopy, exotic states can serve as a good platform for deepening our understanding of the non-perturbative behavior of" +"---\nauthor:\n- 'J.\u00a0A.\u00a0Aguilar\u2013Saavedra'\n- 'F. R. 
Joaquim'\n- 'J.\u00a0F.\u00a0Seabra'\ntitle: 'Mass Unspecific Supervised Tagging (MUST) for boosted jets'\n---\n\nIntroduction\n============\n\nThe high-energy frontier of particle physics has been and will continue to be explored in the decades to come at the Large Hadron Collider (LHC), a machine designed to unveil the intricate dynamics of the Standard Model (SM) and search for new physics signals. Being a proton-proton collider, the LHC abundantly produces sprays of hadronised quarks and gluons (jets), stemming mainly from pure Quantum Chromodynamics (QCD) processes. When sufficiently boosted, the hadronic decay products of SM particles like the $W$, $Z$ and Higgs bosons and the top quark become highly collimated yielding single \u2018fat\u2019 jets. This could also happen for new particles decaying hadronically. Actually, multi-jet signals originated from direct or cascade decays of yet unseen particles are predicted in a plethora of theoretical frameworks beyond the SM, ranging from left-right symmetric models\u00a0[@Aguilar-Saavedra:2015iew] to scenarios with warped extra dimensions\u00a0[@Agashe:2016rle; @Agashe:2016kfr]. The complexity of the various possible jet topologies, and the importance of their identification, fostered the development of discrimination techniques to distinguish (signal) jets produced in collimated decays of heavy particles, from" +"---\nabstract: 'Pull-in (or electro-mechanical) instability occurs when a drastic decrease in the thickness of a dielectric elastomer results in electrical breakdown, which limits the applications of dielectric devices. Here we derive the criterions for determining the pull-in instability of dielectrics actuated by different loading methods: voltage-control, charge-control, fixed pre-stress and fixed pre-stretch, by analyzing the free energy of the actuated systems. The Hessian criterion identifies a maximum in the loading curve beyond which the elastomer will stretch rapidly and lose stability, and can be seen as a path to failure. We present numerical calculations for neo-Hookean ideal dielectrics, and obtain the maximum allowable actuation stretch of a dielectric before failure by electrical breakdown. We find that applying a fixed pre-stress or a fixed pre-stretch to a charge-driven dielectric may decrease the stretchability of the elastomer, a scenario which is the opposite of what happens in the case of a voltage-driven dielectric. Results show that a reversible large actuation of a dielectric elastomer, free of the pull-in instability, can be achieved by tuning the actuation method.'\nauthor:\n- |\n Y.P. Su$^{1,2}$, W. Q. Chen$^{1}$, M. Destrade$^{2,1}$\\\n $^1$ Department of Engineering Mechanics,\\\n Zhejiang University, Hangzhou 310027, P.R. China\\\n $^2$ School of" +"---\nabstract: 'In this paper we evaluate the performance of two superadiabatic stimulated Raman adiabatic passage (STIRAP) protocols derived from Gaussian and sin-cos pulses, under dissipation and Ornstein-Uhlenbeck noise in the energy levels. We find that for small amplitudes of Stokes and pump pulses, the population transfer is mainly achieved directly through the counterdiabatic pulse, while for large amplitudes the conventional STIRAP path dominates. This kind of \u201chedging\" leads to a remarkable robustness against dissipation in the lossy intermediate state. 
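For the voltage-controlled dielectric discussed above, the loading-curve maximum that the Hessian criterion flags can be located numerically in the textbook special case of an equibiaxially stretched ideal-dielectric neo-Hookean film, where stress balance gives the dimensionless voltage-stretch relation $\tilde V=\sqrt{\lambda^{-2}-\lambda^{-8}}$. This is a classic flat-membrane illustration of pull-in (with peak at $\lambda=2^{1/3}\approx 1.26$), not the paper's graded tube geometry.

```python
import numpy as np

# Dimensionless voltage V~ = (Phi/H)*sqrt(eps/mu) as a function of stretch.
lam = np.linspace(1.001, 3.0, 5000)
v = np.sqrt(lam**-2 - lam**-8)
k = np.argmax(v)
print(lam[k], v[k])   # ~1.26 and ~0.687: beyond this peak the film pulls in
```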
For small pulse amplitudes and increasing noise correlation time the performance is decreased, since the dominant counterdiabatic pulse is affected more, while for large pulse amplitudes, where the STIRAP path dominates, the efficiency is degraded more for intermediate correlation times (compared to the pulse duration). For the Gaussian superadiabatic STIRAP protocol we also investigate the effect of delay between pump and Stokes pulses and find that under the presence of noise the performance is improved for increasing delay. We conclude that the Gaussian protocol with suitably chosen delay and the sin-cos protocol perform quite well even under severe noise conditions. The present work is expected to have a broad spectrum of applications, since STIRAP has a crucial role in modern" +"---\nabstract: 'American universities use a procedure based on a rolling six-year graduation rate to calculate statistics regarding their students\u2019 final educational outcomes (graduating or not graduating). As\u00a0an alternative to the six-year graduation rate method, many studies have applied absorbing Markov chains for estimating graduation rates. In both cases, a frequentist approach is used. For\u00a0the standard six-year graduation rate method, the frequentist approach corresponds to counting the number of students who finished their program within six years and dividing by the number of students who entered that year. In the case of absorbing Markov chains, the frequentist approach is used to compute the underlying transition matrix, which is then used to estimate the graduation rate. In this paper, we apply a sensitivity analysis to compare the performance of the standard six-year graduation rate method with that of absorbing Markov chains. Through the analysis, we highlight significant limitations with regards to the estimation accuracy of both approaches when applied to small sample sizes or cohorts at a university. Additionally, we note that the Absorbing Markov chain method introduces a significant bias, which leads to an underestimation of the true graduation rate. 
To\u00a0overcome both these challenges, we propose and" +"---\nauthor:\n- 'C.\u00a0Guidorzi[^1]'\n- 'M.\u00a0Orlandini'\n- 'F.\u00a0Frontera'\n- 'L.\u00a0Nicastro'\n- 'S.L.\u00a0Xiong'\n- 'J.Y.\u00a0Liao'\n- 'G.\u00a0Li'\n- 'S.N.\u00a0Zhang'\n- 'L.\u00a0Amati'\n- 'E.\u00a0Virgilli'\n- 'S.\u00a0Zhang'\n- 'Q.C.\u00a0Bu'\n- 'C.\u00a0Cai'\n- 'X.L.\u00a0Cao'\n- 'Z.\u00a0Chang'\n- 'L.\u00a0Chen'\n- 'T.X.\u00a0Chen'\n- 'Y.\u00a0Chen'\n- 'Y.P.\u00a0Chen'\n- 'W.W.\u00a0Cui'\n- 'Y.Y.\u00a0Du'\n- 'G.H.\u00a0Gao'\n- 'H.\u00a0Gao'\n- 'M.\u00a0Gao'\n- 'M.Y.\u00a0Ge'\n- 'Y.D.\u00a0Gu'\n- 'J.\u00a0Guan'\n- 'C.C.\u00a0Guo'\n- 'D.W.\u00a0Han'\n- 'Y.\u00a0Huang'\n- 'J.\u00a0Huo'\n- 'S.M.\u00a0Jia'\n- 'W.C.\u00a0Jiang'\n- 'J.\u00a0Jin'\n- 'L.D.\u00a0Kong'\n- 'B.\u00a0Li'\n- 'C.K.\u00a0Li'\n- 'T.P.\u00a0Li'\n- 'W.\u00a0Li'\n- 'X.\u00a0Li'\n- 'X.B.\u00a0Li'\n- 'X.F.\u00a0Li'\n- 'Z.W.\u00a0Li'\n- 'X.H.\u00a0Liang'\n- 'B.S.\u00a0Liu'\n- 'C.Z.\u00a0Liu'\n- 'H.X.\u00a0Liu'\n- 'H.W.\u00a0Liu'\n- 'X.J.\u00a0Liu'\n- 'F.J.\u00a0Lu'\n- 'X.F.\u00a0Lu'\n- 'Q.\u00a0Luo'\n- 'T.\u00a0Luo'\n- 'R.C.\u00a0Ma'\n- 'X.\u00a0Ma'\n- 'B.\u00a0Meng'\n- 'Y.\u00a0Nang'\n- 'J.Y.\u00a0Nie'\n- 'G.\u00a0Ou'\n- 'J.L\u00a0Qu'\n- 'X.Q.\u00a0Ren'\n- 'N.\u00a0Sai'\n- 'L.M.\u00a0Song'\n- 'X.Y.\u00a0Song'\n- 'L.\u00a0Sun'\n- 'Y.\u00a0Tan'" +"---\nabstract: 'Making use of the equivalence between paraxial wave equation and two-dimensional Schr\u00f6dinger equation, Gaussian beams of monochromatic light, possessing knotted nodal structures are obtained in an analytical way. These beams belong to the wide class of paraxial beams called the Hypergeometric-Gaussian beams \\[E. Karimi, G. Zito, B. Piccirillo, L. Marrucci and E. Santamato, Opt. Lett. [**32**]{}, 3053(2007)\\]. Four topologies are dealt with: the unknot, the Hopf link, the Borromean rings and the trefoil. It is shown in the numerical way that neutral polarizable particles placed in such light fields, upon precise tuning of the initial conditions, can be forced to follow the identical knotted trajectories. A similar outcome is also valid for charged particles that are subject to a ponderomotive potential. This effect can serve to precisely steer particles along chosen complicated pathways exhibiting non-trivial topological character, guide them around obstacles and seems to be helpful in engineering more complex nanoparticles.'\nauthor:\n- Tomasz Rado\u017cycki\ntitle: Knotted trajectories of neutral and charged particles in Gaussian light beams\n---\n\nIntroduction {#int}\n============\n\nIn recent years, it has proven possible to investigate and generate beams of light with some complex structure far from the academic concept of plain waves. Theoretical" +"---\nabstract: 'Many scientific and economic applications involve the statistical learning of high-dimensional functional time series, where the number of functional variables is comparable to, or even greater than, the number of serially dependent functional observations. In this paper, we model observed functional time series, which are subject to errors in the sense that each functional datum arises as the sum of two uncorrelated components, one dynamic and one white noise. 
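The graduation-rate record above lends itself to a worked example. The following is a minimal sketch, assuming an illustrative (not paper-derived) transition matrix, of how an absorbing Markov chain turns frequentist year-to-year transition estimates into a graduation-rate estimate via the fundamental matrix:

```python
import numpy as np

# Illustrative absorbing Markov chain for a four-year program.
# Transient states: years 1-4; absorbing states: [graduate, drop out].
Q = np.array([  # transient -> transient (year-to-year progression)
    [0.00, 0.85, 0.00, 0.00],
    [0.00, 0.00, 0.88, 0.00],
    [0.00, 0.00, 0.00, 0.92],
    [0.00, 0.00, 0.00, 0.05],  # small probability of repeating the final year
])
R = np.array([  # transient -> absorbing: [graduate, drop out]
    [0.00, 0.15],
    [0.00, 0.12],
    [0.00, 0.08],
    [0.90, 0.05],
])

# Fundamental matrix N = (I - Q)^{-1}; B[i, j] is the probability of being
# absorbed in state j when starting from transient state i.
N = np.linalg.inv(np.eye(Q.shape[0]) - Q)
B = N @ R
print(f"Estimated graduation rate for an entering student: {B[0, 0]:.3f}")
```

The bias discussed in that record enters through the frequentist estimates of `Q` and `R`, which become unreliable for small cohorts.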
Motivated by the fact that the autocovariance function of observed functional time series automatically filters out the noise term, we propose a three-step framework by first performing autocovariance-based dimension reduction, then formulating a novel autocovariance-based block regularized minimum distance estimation to produce block sparse estimates, and, based on these, obtaining the final functional sparse estimates. We investigate theoretical properties of the proposed estimators, and illustrate the proposed estimation procedure with the corresponding convergence analysis via three sparse high-dimensional functional time series models. We demonstrate via both simulated and real datasets that our proposed estimators significantly outperform their competitors.'\nauthor:\n- Jinyuan Chang\n- Cheng Chen\n- Xinghao Qiao\n- Qiwei Yao\nbibliography:\n- 'paperbib.bib'\ntitle: |\n **An autocovariance-based learning framework for high-dimensional functional time series[^1]\\\n **\n---\n\n1\n\n[1]{}" +"---\nabstract: 'Neuromorphic computing describes the use of VLSI systems to mimic neuro-biological architectures and is also looked at as a promising alternative to the traditional von Neumann architecture. Any new computing architecture would need a system that can perform floating-point arithmetic. In this paper, we describe a neuromorphic system that performs IEEE 754-compliant floating-point multiplication. The complex process of multiplication is divided into smaller sub-tasks performed by components Exponent Adder, Bias Subtractor, Mantissa Multiplier and Sign OF/UF. We study the effect of the number of neurons per bit on accuracy and bit error rate, and estimate the optimal number of neurons needed for each component.'\nauthor:\n- Karn Dubey Urja Kothari Shrisha Rao\ntitle: |\n Floating-Point Multiplication\\\n Using Neuromorphic Computing\n---\n\n[**Keywords:**]{} IEEE 754, floating point arithmetic, neuromorphic computing, Neural Engineering Framework (NEF)\n\nIntroduction\n============\n\nNeuromorphic computing has recently become prominent as a possible future alternative to the traditional von Neumann architecture\u00a0[@vonneuman] of computing. Some of the problems that are commonly faced when working with classical CMOS-based von Neumann machines are the limitations on their energy efficiencies, and also the absolute limits to speed and scaling on account of physical limits\u00a0[@cmead; @koch1999]. Though Moore\u2019s Law held for" +"---\nabstract: 'We study the effects of adding the Coulomb interactions to the harmonic oscillator (HO) approximation of the heavy parton propagating through the quark-gluon plasma (the extension to QCD of the Moliere theory). We explicitly find the expression for the transverse momentum distribution of the gluon radiation of the heavy quark propagating in the quark gluon plasma in the framework of the Moliere theory, taking into account the BDMPSZ radiation in the harmonic oscillator (HO) approximation, and the Coulomb logarithms described by the additional logarithmic terms in the effective potential. We show that these Coulomb logarithms significantly influence the HO distribution, derived in the BDMPSZ works, especially for the small transverse momenta, filling the dead cone, and reducing the dead cone suppression of the heavy quark radiation (dead cone effect). 
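For the floating-point record above, the sub-tasks its abstract names (sign logic, Exponent Adder, Bias Subtractor, Mantissa Multiplier) have a compact classical reference form. The sketch below manipulates float32 bit fields directly; it is only the digital baseline the neuromorphic components mirror, and it handles normal values without rounding or overflow logic:

```python
import struct

def f32_fields(x: float):
    """Unpack a float32 into (sign, biased exponent, mantissa) bit fields."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def f32_multiply(a: float, b: float) -> float:
    """Multiply two normal float32 values by explicit field manipulation."""
    sa, ea, ma = f32_fields(a)
    sb, eb, mb = f32_fields(b)
    sign = sa ^ sb                        # sign logic
    exp = ea + eb - 127                   # Exponent Adder + Bias Subtractor
    # Mantissa Multiplier: restore the implicit leading 1, multiply 24x24 bits.
    prod = (ma | 1 << 23) * (mb | 1 << 23)
    if prod & (1 << 47):                  # significand product in [2, 4): renormalise
        prod >>= 1
        exp += 1
    mant = (prod >> 23) & 0x7FFFFF        # truncate (no rounding, for brevity)
    bits = (sign << 31) | ((exp & 0xFF) << 23) | mant
    return struct.unpack('>f', struct.pack('>I', bits))[0]

print(f32_multiply(3.5, -2.25), 3.5 * -2.25)  # agrees up to truncation error
```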
In addition, we study the effect of the phase space constraints on the heavy quark energy loss, and argue that taking into account both the phase space constraints and the Coulomb gluons reduces the dependence of the heavy quark energy loss on its mass in the HO approximation.'\nauthor:\n- |\n B.\u00a0Blok$^{1}$,\\\n $^1$ Department of Physics, Technion \u2013 Israel Institute of Technology, Haifa, Israel\\\ntitle: 'Heavy" +"---\nauthor:\n- Jeong Min Park\n- Yuan Cao\n- '$^{\\!\\!\\!\\!,\\ \\!*,\\ \\!\\dagger}$ Kenji Watanabe'\n- Takashi Taniguchi\n- 'Pablo Jarillo-Herrero'\nbibliography:\n- 'references.bib'\ntitle: 'Flavour Hund\u2019s Coupling, Correlated Chern Gaps, and Diffusivity in Moir\u00e9 Flat Bands'\n---\n\n[^1]\n\n**Interaction-driven spontaneous symmetry breaking lies at the heart of many quantum phases of matter. In moir\u00e9 systems, broken spin/valley \u2018flavour\u2019 symmetry in flat bands underlies the parent state out of which ultimately correlated and topological ground states emerge[@cao_correlated_2018; @cao_unconventional_2018; @chen_evidence_2019; @yankowitz_tuning_2019; @lu_superconductors_2019; @sharpe_emergent_2019; @serlin_intrinsic_2020; @chen_tunable_2020; @wong_cascade_2020; @zondiner_cascade_2020]. However, the microscopic mechanism of such flavour symmetry breaking and its connection to the low-temperature many-body phases remain to be understood. Here, we investigate the symmetry-broken many-body ground state of magic angle twisted bilayer graphene (MATBG) and its nontrivial topology using simultaneous thermodynamic and transport measurements. We directly observe flavour symmetry breaking as a pinning of the chemical potential $\\mu$ at all integer fillings of the moir\u00e9 superlattice, highlighting the importance of flavour Hund\u2019s coupling in the many-body ground state. The topological nature of the underlying flat bands is manifested upon breaking time-reversal symmetry, where we measure energy gaps corresponding to Chern insulator states with Chern numbers $C=3,2,1$ at filling factors $\\nu=1,2,3$, respectively, consistent" +"---\nabstract: 'We show that the moduli space of $U\\oplus \\langle -2k \\rangle$-polarized K3 surfaces is unirational for $k \\le 50$ and $k \\notin \\{11,35,42,48\\}$, and for several other values of $k$ up to $k=97$. Our proof is based on a systematic study of the projective models of elliptic K3 surfaces in ${{\\mathbb{P}}}^n$ for $3\\le n \\le 5$ containing either the union of two smooth rational curves or the union of a smooth rational curve and an elliptic curve intersecting at one point.'\naddress:\n- 'Institut f\u00fcr Algebraische Geometrie, Leibniz Universit\u00e4t Hannover, Welfengarten 1, 30167 Hannover, Germany.'\n- 'Universit\u00e4t des Saarlandes, Campus E2 4, D-66123 Saarbr\u00fccken, Germany'\n- 'Institut f\u00fcr Algebraische Geometrie, Leibniz Universit\u00e4t Hannover, Welfengarten 1, 30167 Hannover, Germany.'\n- |\n *Current address:* Mathematisches Institut\\\n Universit\u00e4t Bonn\\\n Endenicher Allee 60\\\n 53115 Bonn\\\n Germany\nauthor:\n- Mauro Fortuna\n- Michael Hoff\n- Giacomo Mezzedimi\nbibliography:\n- 'bibliography.bib'\ntitle: Unirational moduli spaces of some elliptic K3 surfaces\n---\n\nIntroduction\n============\n\nBy classical results and works of Mukai [@Muk88; @Mukg11; @Mukg13; @Mukg16; @Mukg1820], it is known that the moduli spaces of complex K3 surfaces of genus $g \\le 12$ and $g=13, 16, 18, 20$ are unirational. 
This was later improved by Farkas" +"---\nabstract: 'High-order plasma shaping (mainly elongation and shift, as opposed to low-order toroidicity) is shown, under certain conditions, to open gaps in the coupled shear-[Alfv\u00e9n]{} and acoustic continua at frequencies significantly above the values predicted by previous theories. Global eigenmodes in these gaps, which lie between those of geodesic acoustic modes (GAMs) and toroidicity-induced [Alfv\u00e9n]{} eigenmodes (TAEs), are found unstable to hot-ion populations typical of tokamak operation, whilst their fundamental resonances with circulating particles are shown to take place at velocities near the geometric mean of the [Alfv\u00e9n]{} and sound speeds. Therefore, such eigenmodes are expected to be observed near the predicted frequencies at operating tokamaks, playing a still unexplored role in magnetohydrodynamic spectroscopy as well as in the stability of next-step fusion experiments.'\nauthor:\n- Paulo Rodrigues\n- Francesca Cella\ntitle: ' High-order geodesic coupling of shear-[Alfv\u00e9n]{} and acoustic continua in tokamaks'\n---\n\nIntroduction\n============\n\nContinuous spectra of the magnetohydrodynamics (MHD) operator are central to a variety of phenomena dominated by inhomogeneous magnetic fields\u00a0[@uberoi.1972; @grad.1973; @goedbloed.1975], from astrophysical plasmas to fusion devices. Their origin lies on vanishing coefficients in the eigenvalue equation $$\\mathcal{F}({\\boldsymbol{\\mathrm{\\xi}}}) + \\mu_0 \\rho \\omega^2 {\\boldsymbol{\\mathrm{\\xi}}} = 0\n\\label{eq:eigenvalue.problem}$$ for small plasma displacements ${\\boldsymbol{\\mathrm{\\xi}}} e^{-i" +"---\nabstract: 'Mendelian randomization is a powerful tool for inferring the presence, or otherwise, of causal effects from observational data. However, the nature of genetic variants is such that pleiotropy remains a barrier to valid causal effect estimation. There are many options in the literature for pleiotropy robust methods when studying the effects of a single risk factor on an outcome. However, there are few pleiotropy robust methods in the multivariable setting, that is, when there are multiple risk factors of interest. In this paper we introduce three methods which build on common approaches in the univariable setting: MVMR-Robust; MVMR-Median; and MVMR-Lasso. We discuss the properties of each of these methods and examine their performance in comparison to existing approaches in a simulation study. MVMR-Robust is shown to outperform existing outlier robust approaches when there are low levels of pleiotropy. MVMR-Lasso provides the best estimation in terms of mean squared error for moderate to high levels of pleiotropy, and can provide valid inference in a three sample setting. MVMR-Median performs well in terms of estimation across all scenarios considered, and provides valid inference up to a moderate level of pleiotropy. We demonstrate the methods in an applied example looking at" +"---\nabstract: 'Breaking waves entrain gas beneath the surface. The wave-breaking process energizes turbulent fluctuations that break bubbles in quick succession to generate a wide range of bubble sizes. Understanding this generation mechanism paves the way towards the development of predictive models for large-scale maritime and climate simulations. @Garrett1 suggested that super-Hinze-scale turbulent break-up transfers entrained gas from large to small bubble sizes in the manner of a cascade. 
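As background to the Mendelian randomization record above: the robust MVMR variants it introduces build on the standard multivariable inverse-variance weighted (IVW) estimator. The toy sketch below implements that baseline on simulated summary statistics; the IVW form is an assumption here, not a method defined in the record itself:

```python
import numpy as np

# beta_X[j, k]: association of SNP j with risk factor k; beta_Y[j]: association
# of SNP j with the outcome; se_Y[j]: its standard error. All values simulated.
rng = np.random.default_rng(0)
J, K = 100, 2                       # 100 SNPs, 2 risk factors
beta_X = rng.normal(0.1, 0.03, size=(J, K))
theta_true = np.array([0.5, -0.2])  # causal effects to recover
se_Y = np.full(J, 0.02)
beta_Y = beta_X @ theta_true + rng.normal(0, se_Y)

# Weighted least squares without intercept: solve (X' W X) theta = X' W y.
W = 1.0 / se_Y**2
XtWX = beta_X.T @ (beta_X * W[:, None])
XtWy = beta_X.T @ (beta_Y * W)
theta_hat = np.linalg.solve(XtWX, XtWy)
print(theta_hat)  # close to [0.5, -0.2]; pleiotropy would bias this estimate
```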
We provide a theoretical basis for this bubble-mass cascade by appealing to how energy is transferred from large to small scales in the energy cascade central to single-phase turbulence theories. A bubble break-up cascade requires that break-up events predominantly transfer bubble mass from a certain bubble size to a slightly smaller size on average. This property is called locality. In this paper, we analytically quantify locality by extending the population balance equation in conservative form to derive the bubble-mass transfer rate from large to small sizes. Using our proposed measures of locality, we show that scalings relevant to turbulent bubbly flows, including those postulated by @Garrett1 and observed in breaking-wave experiments and simulations, are consistent with a strongly local transfer rate, where the influence of non-local contributions decays in a power-law" +"---\nauthor:\n- 'Tobias Rapp[^1]'\n- 'Carsten Dachsbacher[^2]'\nbibliography:\n- '../References.bib'\ntitle: Uncertain Transport in Unsteady Flows\n---\n\nAlthough most experiments and simulations produce deterministic data, uncertainty exists in all measured or simulated flows. This uncertainty might be estimated from repeated simulation runs or measurements, it might be introduced by data processing and reduction, or it can be explicitly modeled. Studying uncertainty is especially relevant in unsteady flows, where small variations in the initial conditions can cause dramatic changes to the flow. In this paper, we investigate uncertainties in the Lagrangian transport, i.e.\u00a0the advection of a material by the flow.\n\nFor deterministic flows, the Lagrangian coherent structures (LCS) identify a topological skeleton of the flow dynamics in a finite-time interval. Recent work has extended the definitions of coherent structures to uncertain flows. The probabilistic\u00a0[@Guo2016] or averaged\u00a0[@Schneider2012] transport is estimated using a Monte Carlo approach, i.e.\u00a0by advecting a large amount of particles. While the LCS are theoretically well established, this is, to our knowledge, not the case for its probabilistic extension.\n\nBased on recent work from Haller, Karrasch, and Kogelbauer\u00a0[@Karrasch2018; @Karrasch2020], we employ the diffusion barrier strength (DBS) to identify transport barriers and enhancers to stochastic" +"---\nabstract: |\n We consider a continuous-time game-theoretic model of an investment market with short-lived assets and endogenous asset prices. The first goal of the paper is to formulate a stochastic equation which determines wealth processes of investors and to provide conditions for the existence of its solution. The second goal is to show that there exists a strategy such that the logarithm of the relative wealth of an investor who uses it is a submartingale regardless of the strategies of the other investors, and the relative wealth of any other essentially different strategy vanishes asymptotically. This strategy can be considered as an optimal growth portfolio in the model.\n\n *Keywords:* asset market game, relative growth optimal strategy, martingale convergence, evolutionary finance.\n\n *MSC 2010:* 91A25, 91B55. 
*JEL Classification:* C73, G11.\nauthor:\n- 'Mikhail Zhitlukhin[^1]'\nbibliography:\n- 'continuous-time-game.bib'\ndate: 30 August 2020\ntitle: 'A continuous-time asset market game with short-lived assets'\n---\n\nIntroduction\n============\n\nThis paper proposes a dynamic game-theoretic model of an investment market \u2013 an *asset market game* \u2013 and studies strategies that allow an investor to achieve faster growth of wealth compared to rival market participants. The model provides an outlook on growth optimal portfolios different from the well-known" +"---\nabstract: 'In order to develop systems capable of modeling artificial life, we need to identify which systems can produce complex behavior. We present a novel classification method applicable to any class of deterministic discrete space and time dynamical systems. The method distinguishes between different asymptotic behaviors of a system\u2019s average computation time before entering a loop. When applied to elementary cellular automata, we obtain classification results, which correlate very well with Wolfram\u2019s manual classification. Further, we use it to classify 2D cellular automata to show that our technique can easily be applied to more complex models of computation. We believe this classification method can help to develop systems in which complex structures emerge.'\nauthor:\n- 'Barbora Hudcova$^{1, 2}$'\n- |\n Tomas Mikolov$^{2}$\\\n \\\n $^1$Charles University, Prague\\\n $^2$Czech Institute of Informatics, Robotics and Cybernetics, CTU, Prague\nbibliography:\n- 'example.bib'\ntitle: Classification of Complex Systems Based on Transients\n---\n\nIntroduction\n============\n\nThere are many approaches to searching for systems capable of open-ended evolution. One option is to carefully design a model and observe its dynamics. Iconic examples were designed by [@tierra], [@avida], or @chromaria. However, as we lack any formal definition of open-endedness or complexity, there is no formal method of" +"---\nabstract: 'We investigate the critical behavior of the two-dimensional spin-$1$ Baxter-Wu model in a crystal field using entropic sampling simulations with the joint density of states. We obtain the temperature-crystal field phase diagram, which includes a tetracritical line ending at a pentacritical point. A finite-size scaling analysis of the maximum of the specific heat, while changing the crystal field anisotropy, is used to obtain a precise location of the pentacritical point. Our results give the critical temperature and crystal field as $T_{pc}=0.98030(10)$ and $D_{pc}=1.68288(62)$. We also detect that at the first-order region of the phase diagram, the specific heat exhibits a double peak structure as in the Schottky-like anomaly, which is associated with an order-disorder transition.'\nauthor:\n- 'L. N. Jorge'\n- 'P. H. L. Martins'\n- 'Claudio J. DaSilva'\n- 'L. S. Ferreira'\n- 'A. A. Caparica'\nbibliography:\n- 'referencias.bib'\ntitle: 'An entropic simulational study of the spin-$1$ Baxter-Wu model in a crystal field'\n---\n\n\\[sec:level1\\]Introduction {#sec:introduction}\n==========================\n\nThe spin-$1$ Baxter-Wu (BW) model in a crystal field[@Kinzel1981; @Costa2004; @Dias2017] is a generalization of the original spin$-\\frac{1}{2}$ BW model[@Wood1972; @Baxter1973; @baxter1974ising; @baxter1974ising2], which includes a crystal field anisotropic term $D$, in addition to the three-spin interaction. 
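The transient-classification record above is easy to prototype. A minimal sketch, assuming periodic boundaries and random initial states, that measures how long an elementary cellular automaton runs before first revisiting a configuration:

```python
import numpy as np

def transient_length(rule: int, width: int, rng, max_steps: int = 10_000):
    """Steps before an elementary CA trajectory first re-enters a seen state."""
    table = [(rule >> i) & 1 for i in range(8)]            # Wolfram rule table
    state = tuple(rng.integers(0, 2, size=width))
    seen = {state: 0}
    for t in range(1, max_steps + 1):
        s = np.array(state)
        idx = 4 * np.roll(s, 1) + 2 * s + np.roll(s, -1)   # periodic boundary
        state = tuple(table[i] for i in idx)
        if state in seen:
            return seen[state]       # time the revisited state first occurred
        seen[state] = t
    return None                      # no loop found within the step budget

rng = np.random.default_rng(1)
for rule in (8, 90, 110):            # class 1 / class 3 / class 4 examples
    lengths = [transient_length(rule, 40, rng) for _ in range(20)]
    vals = [l for l in lengths if l is not None]
    print(rule, np.mean(vals) if vals else "no loop within budget")
```

The classification idea is then to study how such average transient lengths scale as `width` grows.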
The Hamiltonian of" +"---\nabstract: |\n [ Diagnostic testing is germane to a variety of scenarios in medicine, pandemic tracking, threat detection, and signal processing. This is an expository paper with some original results. Here we first set up a mathematical architecture for diagnostics, and explore its probabilistic underpinnings. Doing so enables us to develop new metrics for assessing the efficacy of different kinds of diagnostic tests, and for solving a long-standing open problem in diagnostics, namely, comparing tests when their receiver operating characteristic curves cross. The first is done by introducing the notion of what we call a ***Gini Coefficient***; the second by invoking the information theoretic notion of ***dinegentropy***. Taken together, these may be seen as a contribution to the state of the art of diagnostics.\\\n The spirit of our work could also be relevant to the much discussed topic of ***batch testing***, where each batch is defined by the partitioning strategy used to create it. However, this possibility has not been explored here in any detail. Rather, we invite the attention of other researchers to investigate this idea, as future work.\\\n \\[10pt\\] ]{}\n\n [**Keywords:** *Area Under the Curve, Receiver Operating Characteristic Curve, Kullback-Leibler Distance, Gini Coefficient*.]{}\n---\n\n**THE DINEGENTROPY OF" +"---\nabstract: 'We study the R\u00e9nyi holographic dark energy (RHDE) model by using the future and the particle horizons as the infrared (IR) cut-off. With the initial condition from the literature, most of the cosmological parameters are computed. Some of the results agree with the observation that the present universe is in accelerating expansion and in a phantom phase.'\nauthor:\n- 'Suphakorn Chunlen[^1]'\n- 'Phongsaphat Rangdee[^2]'\ntitle: 'Exploring the R\u00e9nyi Holographic Dark Energy Model with the Future and the Particle Horizons as the Infrared Cut-off'\n---\n\nIntroduction\n============\n\nIt is widely known that the universe is expanding with acceleration [@Riess:1998cb; @Hinshaw:2012aka; @Aghanim:2018eyx]. Many theoretical models have been constructed to explain this behaviour. One of them is the dark energy model. There are various types of the dark energy model. The most common and widely accepted one is the Lambda Cold Dark Matter ($\\Lambda$CDM) model. $\\Lambda$CDM has given results consistent with observations, but it suffers from the cosmological constant problem [@Peebles:2002gy; @Padmanabhan:2002ji; @Copeland:2006wr; @Frieman:2008sn; @Bamba:2012cp; @Wang:2016och]. Many new dark energy models have been established to solve this issue. One of those is the holographic dark energy (HDE) model proposed in [@Li:2004rb]. Inspired by the holographic principle [@tHooft:1993dmi], the HDE" +"---\nabstract: 'We construct a lattice model of topological order (Kagome quantum spin liquids) and solve it with unbiased quantum Monte Carlo simulations. A three-stage anyon condensation with two transitions from a $\\mathbb Z_2\\boxtimes\\mathbb Z_2$ topological order to a $\\mathbb Z_2$ topological order and eventually to a trivial symmetric phase is revealed. These results provide concrete examples of phase transitions between topological orders in quantum magnets. 
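Relating to the diagnostics record above: its ***Gini Coefficient*** is the paper's own construction, but the textbook quantity 2*AUC - 1 that it echoes can be computed in a few lines, e.g. via the Mann-Whitney rank statistic on simulated test scores (shown only to fix ideas, not as the paper's metric):

```python
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)                    # 1 = diseased
scores = labels * 1.0 + rng.normal(0, 1.2, size=500)     # noisy test statistic

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) statistic; assumes continuous scores."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

a = auc(labels, scores)
print(f"AUC = {a:.3f}, classical Gini = {2 * a - 1:.3f}")
```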
The designed quantum spin liquid model and its numerical solution offer a playground for further investigations on vestigial anyon condensation.'\nauthor:\n- 'Yan-Cheng Wang'\n- Zheng Yan\n- Chenjie Wang\n- Yang Qi\n- Zi Yang Meng\nbibliography:\n- 'bilayer-bfg.bib'\ntitle: Vestigial anyon condensation in kagome quantum spin liquids\n---\n\nIntroduction\n============\n\nQuantum spin liquids (QSLs)\u00a0[@YiZhou2017; @Broholm2020] are the embodiment of topological orders and offer the ideal platform for systematical investigations of fractional anyonic excitations and statistics therein\u00a0[@Wen2017; @Wen2019]. While experimental progress on QSL and topological orders is difficult and often hampered by the complexity of materials and limitation of probing techniques, such as how to remove the impurity scattering of kagome antiferromagnets herbertsmithite and Zn-doped barlowite\u00a0[@HanTH12; @FengZL17; @WeiYuan2017; @ZLFeng2019; @JJWen2019; @YuanWei2020; @YuanWei2020nano], theoretical progress on both topological orders and quantum" +"---\nabstract: 'We utilize classical facts from topology to show that the classification problem in machine learning is always solvable under very mild conditions. Furthermore, we show that a softmax classification network acts on an input topological space by a finite sequence of topological moves to achieve the classification task. Moreover, given a training dataset, we show how topological formalism can be used to suggest the appropriate architectural choices for neural networks designed to be trained as classifiers on the data. Finally, we show how the architecture of a neural network cannot be chosen independently from the shape of the underlying data. To demonstrate these results, we provide example datasets and show how they are acted upon by neural nets from this topological perspective.'\naddress: Santa Clara University\nauthor:\n- Mustafa Hajij\n- Kyle Istvan\nbibliography:\n- 'refs\\_2.bib'\ntitle: A TOPOLOGICAL FRAMEWORK FOR DEEP LEARNING\n---\n\nIntroduction\n============\n\nThe purpose of this article is to give a high level description of the role topology plays when considering the action of a classification neural network on the domains of its target data\u2019s components. This work is driven by the observation that a neural network is essentially a composition of continuous functions." +"---\nabstract: 'The *Shapes Constraint Language (SHACL)* is a recent W3C recommendation language for validating RDF data. Specifically, SHACL documents are collections of constraints that enforce particular shapes on an RDF graph. Previous work on the topic has provided theoretical and practical results for the validation problem, but did not consider the standard decision problems of *satisfiability* and *containment*, which are crucial for verifying the feasibility of the constraints and important for design and optimization purposes. In this paper, we undertake a thorough study of the different features of SHACL by providing a translation to a new first-order language, called [$\\texttt{\\textbf{SCL}}$]{}, that precisely captures the semantics of SHACL w.r.t.\u00a0satisfiability and containment. We study the interaction of SHACL features in this logic and provide the detailed map of decidability and complexity results of the aforementioned decision problems for different SHACL sublanguages. 
Notably, we prove that both problems are undecidable for the full language, but we present decidable combinations of interesting features.'\nauthor:\n- Paolo Pareti\n- George Konstantinidis\n- Fabio Mogavero\n- |\n \\\n Timothy J.\u00a0Norman\nbibliography:\n- 'litbib.bib'\ntitle: |\n SHACL Satisfiability and Containment\\\n (Extended Paper)\n---\n\nIntroduction\n============\n\nThe Shapes Constraint Language (SHACL) has been recently introduced" +"---\nabstract: 'Since the epoch of the cosmic star formation peak at $z \\sim 2$, most star formation has been obscured in high-mass galaxies, while in low-mass galaxies the radiation escapes unobstructed. During the reionization epoch, the presence of evolved, dust-obscured galaxies is a challenge to galaxy formation and evolution models. By means of a chemodynamical evolution model, we investigate the star formation and dust production required to build up the bulk of dust in galaxies with initial baryonic mass ranging from $7.5 \\times 10^{7}$\u00a0M$_\\odot$ to $2.0 \\times 10^{12}$\u00a0M$_\\odot$. The star formation efficiency was also chosen to represent the star formation rate from irregular dwarf to giant elliptical galaxies. We adopted a dust coagulation efficiency from [@dwek1998evolution Case A] as well as a lower efficiency one (Case B), about five times smaller than Case A. All possible combinations of these parameters were computed, totaling forty different scenarios. We find that in systems with high star formation, dust accretion in the ISM dominates over stellar production before the star formation peak, making these systems almost insensitive to the dust coagulation efficiency. In low star formation systems, the difference between Cases A and B lasts longer, mainly in small galaxies. Thus," +"---\nabstract: 'Face super-resolution (SR) has become an indispensable function in security solutions such as video surveillance and identification systems, but distortion in facial components remains a great challenge. Most state-of-the-art methods have utilized facial priors with deep neural networks. These methods require extra labels, longer training time, and larger computation memory. In this paper, we propose a novel Edge and Identity Preserving Network for face SR, named EIPNet, to minimize the distortion by utilizing a lightweight edge block and identity information. We present an edge block to extract perceptual edge information, and concatenate it to the original feature maps at multiple scales. This structure progressively provides edge information in reconstruction to aggregate local and global structural information. Moreover, we define an identity loss function to preserve identification of SR images. The identity loss function compares feature distributions between SR images and their ground truth to recover identities in SR images. In addition, we provide a luminance-chrominance error (LCE) to separately infer brightness and color information in SR images. The LCE method not only reduces the dependency of color information by dividing brightness and color components but also enables our network to reflect differences between" +"---\nabstract: '[Spectral observations of the type-IIb supernova (SN) 2016gkg [at 300-800 days]{} are reported. The [spectra show]{} nebular characteristics, revealing emission from the progenitor star\u2019s metal-rich core and providing clues to the kinematics and physical conditions of the explosion. 
The nebular spectra are dominated by emission lines of \\[O\u00a0I\\]\u00a0$\\lambda\\lambda6300, 6364$ and \\[Ca\u00a0II\\]\u00a0$\\lambda\\lambda7292, 7324$. Other notable, albeit weaker, emission lines include Mg\u00a0I\\]\u00a0$\\lambda4571$, \\[Fe\u00a0II\\]\u00a0$\\lambda7155$, O\u00a0I\u00a0$\\lambda7774$, Ca II triplet, and a broad, boxy feature at the location of H$\\alpha$. Unlike in other stripped-envelope SNe, the \\[O\u00a0I\\] doublet is clearly resolved due to the presence of strong narrow components. The doublet shows an unprecedented emission line profile consisting of at least three components for each \\[O\u00a0I\\]$\\lambda6300, 6364$ line: a broad component (width $\\sim2000$ km\u00a0s$^{-1}$), and a pair of narrow blue and red components (width $\\sim300$ km\u00a0s$^{-1}$) mirrored against the rest velocity. The narrow component appears also in other lines, and is conspicuous in \\[O\u00a0I\\]. This indicates the presence of multiple distinct kinematic components of material at low and high velocities. The low-velocity components are likely to be produced by a dense, slow-moving emitting region near the center, while" +"---\nabstract: 'Unmanned aircraft systems (UAS), or unmanned aerial vehicles (UAVs), often referred to as drones, have been experiencing healthy growth in the United States and around the world. The positive uses of UAS have the potential to save lives, increase safety and efficiency, and enable more effective science and engineering research. However, UAS are subject to threats stemming from increasing reliance on computer and communication technologies, which place public safety, national security, and individual privacy at risk. To promote safe, secure and privacy-respecting UAS operations, there is an urgent need for innovative technologies for detecting, tracking, identifying and mitigating UAS. A Counter-UAS (C-UAS) system is defined as a system or device capable of lawfully and safely disabling, disrupting, or seizing control of an unmanned aircraft or unmanned aircraft system. Over the past 5 years, significant research efforts have been made to detect, and mitigate UAS: detection technologies are based on acoustic, vision, passive radio frequency, radar, and data fusion; and mitigation technologies include physical capture or jamming. In this paper, we provide a comprehensive survey of existing literature in the area of C-UAS, identify the challenges in countering unauthorized or unsafe UAS, and evaluate the trends of detection and" +"---\nauthor:\n- Andrea Giudici\n- 'John S. Biggins'\nbibliography:\n- 'references.bib'\ntitle: 'Giant deformations and soft-inflation in LCE balloons'\n---\n\nIntroduction\n============\n\nCylindrical balloons, commonly encountered at parties, have $N$ shaped pressure-volume curves, and the negative gradient generates classic ballooning instabilities during inflation [@mallock1891ii]. Under pressure control, the balloon jumps in volume at the pressure maximum, to a substantially larger (ballooned) state. Under volume control, the cylinder instead phase-separates into ballooned and un-ballooned portions [@ChaterHutchinson; @gent2005elastic; @meng2014]. Here, we show this instability can be controlled, enriched and amplified in balloons made from liquid crystal elastomers (LCEs).\n\nLCEs [@warner2007liquid] are rubbery networks of rod-shaped mesogens. 
Like conventional liquid crystals[@de1993physics], the rods adopt an isotropic orientation distribution when hot, but align below a critical temperature to form a nematic phase. In elastomers, alignment causes a dramatic reversible elongation along the (unit) director $\\n$, (Fig.\u00a0\\[fig.intro1\\] (a)i), making LCEs soft actuators [@kupfer1991nematic; @de1997artificial]. LCE bubbles/balloons have been fabricated [@LCEBalloons] but their instabilities remain unexplored. We show that LCE thermal actuation can trigger the ballooning instability (Fig.\u00a0\\[fig.intro1\\](a)ii), transforming LCEs into sub-critical actuators with greatly amplified strain.\n\n![Top: LCEs elongate on cooling from isotropic to nematic. In an LCE balloon cooled at constant" +"---\nabstract: 'In recent years, the chain fountain has become prominent for its counter-intuitive, fascinating physical behavior. Most widely known is the experiment in which a long chain leaves an elevated beaker like a fountain and falls to the ground under the influence of gravity. The observed chain fountain was precisely described and predicted by an inverted catenary in several publications. The underlying assumptions are a stationary fountain and the knowledge of the boundary conditions, the ground and beaker reaction forces. In contrast to determining the steady-state chain fountain shape, it turns out that the main difficulty lies in predicting the reaction forces. A consistent and complete physical explanation model is currently not available. In order to give a reasonable explanation for the reaction forces, an illustrative mechanical system for generating a steady-state chain fountain is proposed in this work. The model allows one to generate all physically possible chain fountains by adjusting a pulley arrangement. The simplifications incorporated make the phenomenon accessible to undergraduate students.'\nauthor:\n- |\n Johannes Mayet\\\n Chair of Applied Mechanics\\\n Department of Mechanical Engineering\\\n Technical University of Munich\\\n `johannesmayet@tum.de`\\\n Friedrich Pfeiffer[^1]\\\n Chair of Applied Mechanics\\\n Department of Mechanical Engineering\\\n Technical University of Munich\\\n `pfeiffer@tum.de`\\\nbibliography:\n- 'ms.bib'\ntitle:" +"---\nabstract: 'Surgical instrument segmentation is a key component in developing context-aware operating rooms. Existing works on this task heavily rely on the supervision of a large amount of labeled data, which involves laborious and expensive human effort. In contrast, a more affordable unsupervised approach is developed in this paper. To train our model, we first generate anchors as pseudo labels for instruments and background tissues respectively by fusing coarse handcrafted cues. Then a semantic diffusion loss is proposed to resolve the ambiguity in the generated anchors via the feature correlation between adjacent video frames. 
In the experiments on the binary instrument segmentation task of the 2017 MICCAI EndoVis Robotic Instrument Segmentation Challenge dataset, the proposed method achieves 0.71 IoU and 0.81 Dice score without using a single manual annotation, which shows the promise of unsupervised learning for surgical tool segmentation.'\nauthor:\n- 'Daochang Liu[^1]'\n- 'Yuhui Wei[ ^fnsymbol[1]{}^]{}'\n- Tingting Jiang\n- Yizhou Wang\n- Rulin Miao\n- Fei Shan\n- Ziyu Li\nbibliography:\n- 'mybib.bib'\ntitle: Unsupervised Surgical Instrument Segmentation via Anchor Generation and Semantic Diffusion\n---\n\nIntroduction\n============\n\nInstrument segmentation in minimally invasive surgery is fundamental for various advanced computer-aided intervention techniques such as" +"---\nabstract: 'In this work we demonstrate that what was previously considered as different mechanisms of baryon asymmetry generation involving two right-handed Majorana neutrinos with masses far below the GUT scale\u2014 leptogenesis via neutrino oscillations and resonant leptogenesis\u2014are actually united. We show that the observed baryon asymmetry can be generated for all experimentally allowed values of the right-handed neutrino masses above $M_N \\gtrsim 100$ MeV. Leptogenesis is effective in a broad range of the parameters, including mass splitting between two right-handed neutrinos as big as $\\Delta M_N/M_N \\sim 0.1$, as well as mixing angles between the heavy and light neutrinos large enough to be accessible to planned intensity experiments or future colliders.'\nauthor:\n- Juraj Klari\u0107\n- Mikhail Shaposhnikov\n- Inar Timiryasov\nbibliography:\n- 'lepto\\_refs.bib'\ntitle: 'Uniting low-scale leptogeneses'\n---\n\n#### Introduction. {#par:introduction}\n\nFlavor oscillations of neutrinos are the only laboratory-tested phenomenon pointing to the incompleteness of the Standard Model (SM). The presence of the ordinary baryonic matter in the observed amounts cannot be explained within the SM either (see, e.g. review\u00a0[@Canetti:2012zc]). The minimal renormalisable extension of the SM contains two or more gauge singlet right-handed neutrinos which allow for a Dirac mass matrix $m_D$ for the" +"---\nabstract: 'Printed Circuit Board (PCB) production is one of the most important stages in making electronic products. A small defect in PCBs can cause significant flaws in the final product. Hence, detecting all defects in PCBs and locating them is essential. In this paper, we propose an approach based on denoising convolutional autoencoders for detecting defective PCBs and locating the defects. Denoising autoencoders take a corrupted image and try to recover the intact image. We trained our model with defective PCBs and forced it to repair the defective parts. Our model not only detects all kinds of defects and locates them, but it can also repair them. By subtracting the repaired output from the input, the defective parts are located. The experimental results indicate that our model detects the defective PCBs with high accuracy (97.5%) compared to state-of-the-art works.'\nauthor:\n- \n- \n- \n- \n- \ntitle: 'PCB Defect Detection Using Denoising Convolutional Autoencoders\\'\n---\n\nPCB, defect detection, autoencoder, denoising convolutional autoencoders\n\nIntroduction\n============\n\nPrinted Circuit Board (PCB) is a collection of electronic boards that helps different electronic components connect to each other. 
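A schematic sketch of the PCB idea just described, where the architecture, sizes, and threshold are illustrative assumptions rather than the paper's exact design: train a convolutional autoencoder to repair defective boards, then localise defects as the input-minus-reconstruction residual:

```python
import torch
import torch.nn as nn

class RepairAE(nn.Module):
    """Toy convolutional autoencoder trained to output the repaired image."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = RepairAE()
defective = torch.rand(1, 3, 128, 128)        # placeholder defective image
repaired = model(defective)
# Training would minimise, e.g., nn.functional.mse_loss(repaired, intact).
defect_map = (defective - repaired).abs().mean(dim=1)   # per-pixel residual
mask = defect_map > 0.3                       # assumed threshold localises defects
```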
It is used in every electronic product and with its help, the" +"---\nabstract: 'We give conditions on the strain-energy function of nonlinear anisotropic hyperelastic materials that ensure compatibility with the classical linear theories of anisotropic elasticity. We uncover the limitations associated with the volumetric-deviatoric separation of the strain energy used, for example, in many Finite Element (FE) codes in that it does not fully represent the behavior of anisotropic materials in the linear regime. This limitation has important consequences. We show that, in the small deformation regime, an FE code based on the volumetric-deviatoric separation assumption predicts that a sphere made of a compressible anisotropic material deforms into another sphere under hydrostatic pressure loading, instead of the expected ellipsoid. For finite deformations, the commonly adopted assumption that fibres cannot support compression is incorrectly implemented in current FE codes and leads to the unphysical result that under hydrostatic tension a sphere of compressible anisotropic material deforms into a larger sphere.'\nauthor:\n- |\n Luigi Vergori$^1$, Michel Destrade$^{1,2}$,\\\n Patrick McGarry$^3$, Ray W. Ogden$^4$\\\n $^1$School of Mathematics, Statistics & Applied Mathematics\\\n National University of Ireland Galway, Ireland,\\\n $^2$School of Mechanical & Materials Engineering,\\\n University College Dublin, Belfield, Dublin 4, Ireland,\\\n $^3$Mechanical & Biomedical Engineering,\\\n National University of Ireland Galway, Ireland,\\\n $^4$School of Mathematics" +"---\nabstract: 'We tackle the problem of predicting the number of optimization steps that a pre-trained deep network needs to converge to a given value of the loss function. To do so, we leverage the fact that the training dynamics of a deep network during fine-tuning are well approximated by those of a linearized model. This allows us to approximate the training loss and accuracy at any point during training by solving a low-dimensional Stochastic Differential Equation (SDE) in function space. Using this result, we are able to predict the time it takes for Stochastic Gradient Descent (SGD) to fine-tune a model to a given loss without having to perform any training. In our experiments, we are able to predict the training time of a ResNet within a 20% error margin on a variety of datasets and hyper-parameters, at a 30 to 45-fold reduction in cost compared to actual training. We also discuss how to further reduce the computational and memory cost of our method, and in particular we show that by exploiting the spectral properties of the gradients\u2019 matrix it is possible to predict training time on a large dataset while processing only a subset of the samples.'\nauthor:\n- |" +"---\nabstract: 'We analyze a readout scheme for Majorana qubits based on dispersive coupling to a resonator. We consider two variants of Majorana qubits: the Majorana transmon and the Majorana box qubit. In both cases, the qubit-resonator interaction can produce sizeable dispersive shifts in the MHz range for reasonable system parameters, allowing for submicrosecond readout with high fidelity. For Majorana transmons, the light-matter interaction used for readout manifestly conserves Majorana parity, which leads to a notion of quantum nondemolition (QND) readout that is stronger than for conventional charge qubits. In contrast, Majorana box qubits only recover an approximately QND readout mechanism in the dispersive limit where the resonator detuning is large. 
We also compare dispersive readout to longitudinal readout for the Majorana box qubit. We show that the latter gives faster and higher fidelity readout for reasonable parameters, while having the additional advantage of being manifestly QND, and so may prove to be a better readout mechanism for these systems.'\nauthor:\n- 'Thomas\u00a0B.\u00a0Smith'\n- 'Maja\u00a0C.\u00a0Cassidy'\n- 'David\u00a0J.\u00a0Reilly'\n- 'Stephen\u00a0D.\u00a0Bartlett'\n- 'Arne\u00a0L.\u00a0Grimsmo'\ntitle: Dispersive readout of Majorana qubits\n---\n\nIntroduction {#section:introduction}\n============\n\n![image](standalone.pdf)\n\nTopological phases of matter offer a promising platform" +"---\nabstract: 'Automatic pronunciation error detection (APED) plays an important role in the domain of language learning. As for the previous ASR-based APED methods, the decoded results need to be aligned with the target text so that the errors can be found. However, since the decoding process and the alignment process are independent, the prior knowledge about the target text is not fully utilized. In this paper, we propose to use the target text as an extra condition for the Transformer backbone to handle the APED task. The proposed method can output the error states with consideration of the relationship between the input speech and the target text in a fully end-to-end fashion. Meanwhile, as the prior target text is used as a condition for the decoder input, the Transformer works in a feed-forward manner instead of autoregressively in the inference stage, which can significantly boost the speed in the actual deployment. We set the ASR-based Transformer as the baseline APED model and conduct several experiments on the L2-Arctic dataset. The results demonstrate that our approach can obtain an 8.4% relative improvement on the $F_1$ score metric.'\naddress: ' Department of Information and Electronic Engineering, Zhejiang University, China'\nauthor:\n-" +"---\nabstract: 'Within the scope of a spherically symmetric space-time we study the role of different types of matter in the formation of different configurations with spherical symmetries. Here we have considered matter with barotropic equation of state, scalar field, electromagnetic field and an interacting system of scalar and electromagnetic field as the source. Corresponding field equations are solved exploiting harmonic coordinates. An easy to handle method is proposed which allows one to have an idea about the possible behavior of the metric functions once the components of the EMT of the source field are known.'\nauthor:\n- Bijan Saha\ntitle: 'Static spherically symmetric space-time: some remarks'\n---\n\n1 cm\n\nIntroduction\n============\n\nIn order to describe simple isolated bodies and island-like configurations, spherical symmetry is a natural choice [@BronBook]. Spherically symmetric space-times are invariant under spatial rotation. Metric functions in this case, generally, depend on the radial coordinate and the time coordinate. In the case of a static space-time, metric functions do not depend on time.\n\nStatic spherically symmetric space-time is widely used in physics to obtain analytic and numerical solutions to the Einstein field equations in the presence of different types of source fields. 
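For the APED record above, the conditioning mechanism can be sketched as follows. The layer sizes, vocabularies, and two-state output are hypothetical choices; the point is only that the target text feeds the decoder input, so inference is a single non-autoregressive pass:

```python
import torch
import torch.nn as nn

d_model, n_phones, n_states = 256, 40, 2      # 2 states: correct / mispronounced

speech_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=4)
phone_embed = nn.Embedding(n_phones, d_model)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
classifier = nn.Linear(d_model, n_states)

speech_feats = torch.randn(1, 200, d_model)   # e.g., projected filterbank frames
target_phones = torch.randint(0, n_phones, (1, 30))

memory = speech_encoder(speech_feats)
# The target text acts as the decoder input (the condition), not as a shifted
# output sequence, so no causal mask is needed and one pass suffices.
h = decoder(phone_embed(target_phones), memory)
error_logits = classifier(h)                  # shape (1, 30, n_states)
```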
One of the most celebrated static spherically" +"---\nabstract: 'We theoretically investigate the generation of microscopic atomic NOON states, corresponding to the coherent ${\\ensuremath{|N,0\\rangle}\\xspace} + {\\ensuremath{|0,N\\rangle}\\xspace}$ superposition with $N\\sim 5$ particles, via collective tunneling of interacting ultracold bosonic atoms within a symmetric double-well potential in the self-trapping regime. We show that a periodic driving of the double well with suitably tuned amplitude and frequency parameters allows one to substantially boost this tunneling process without altering its collective character. The time scale to generate the NOON superposition, which corresponds to half the tunneling time and would be prohibitively large in the undriven double well for the considered atomic populations, can thereby be drastically reduced, which renders the realization of NOON states through this protocol experimentally feasible. Resonance- and chaos-assisted tunneling are identified as key mechanisms in this context. A quantitative semiclassical evaluation of their impact onto the collective tunneling process allows one to determine the optimal choice for the driving parameters in order to generate those NOON states as fast as possible.'\nauthor:\n- 'G. Vanhaele'\n- 'P. Schlagheck'\nbibliography:\n- 'biblio.bib'\ntitle: 'NOON states with ultracold bosonic atoms via resonance- and chaos-assisted tunneling'\n---\n\nIntroduction\n============\n\nNOON states have attracted considerable attention in the quantum physics community" +"---\nauthor:\n- 'Benjamin Lee, Dave Brown, Bongshin Lee, Christophe Hurter, Steven Drucker, and Tim Dwyer'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Data Visceralization: Enabling Deeper Understanding of Data Using Virtual Reality'\n---\n\nCommunicating information using stories that employ data visualization has been explored extensively in recent years [@Riche:2018:DDS; @Segel:2010:NVT]. A fundamental part of data visualization is processing and transforming raw data, ultimately mapping this abstracted information into attributes represented in a visualization [@Card:1999:RIV; @Chi:1998:OIF]. This abstraction, while powerful and in many cases necessary, poses a limitation for data based on physical properties, where the process of measurement causes the connection between the visualization and the underlying \u2018meaning\u2019 of the data to be lost (i.e., what the data truly represents in the real-world). While techniques in data-driven storytelling (e.g.,\u00a0[@stolper2018data]) can help establish context and resolve ambiguity in these cases, these techniques do little to help people truly *understand* the underlying data itself. A common approach used to help improve comprehension of these measures is by using concrete scales [@Chevalier:2013:UCS]\u2014the association of physical measurements and quantities with more familiar objects. However, this often relies on prior knowledge and requires cognitive effort to effectively envision the desired mental imagery.\n\nTo complement these approaches" +"---\nabstract: 'In this technical report, we present the top-performing LiDAR-only solutions for 3D detection, 3D tracking and domain adaptation three tracks in Waymo Open Dataset Challenges 2020. Our solutions for the competition are built upon our recent proposed PV-RCNN 3D object detection framework. 
Several variants of our PV-RCNN are explored, including temporal information incorporation, dynamic voxelization, adaptive training sample selection, classification with RoI features, etc. A simple model ensemble strategy with non-maximum-suppression and box voting is adopted to generate the final results. By using only LiDAR point cloud data, our models finally achieve the 1st place among all LiDAR-only methods, and the 2nd place among all multi-modal methods, on the three tracks (3D Detection, 3D Tracking, and Domain Adaptation) of the Waymo Open Dataset Challenges. Our solutions will be available at .'\nauthor:\n- |\n Shaoshuai Shi$^{1}$ Chaoxu Guo$^{1}$ Jihan Yang$^{2}$ Hongsheng Li$^{1}$\\\n $^1$Multimedia Laboratory, The Chinese University of Hong Kong\\\n $^2$The University of Hong Kong\\\n [shaoshuaics@gmail.com gus\\_guo@outlook.com jihanyang13@gmail.com hsli@ee.cuhk.edu.hk]{}\nbibliography:\n- 'egbib.bib'\ntitle: 'PV-RCNN: The Top-Performing LiDAR-only Solutions for 3D Detection / 3D Tracking / Domain Adaptation of Waymo Open Dataset Challenges'\n---\n\nIntroduction\n============\n\nThe Waymo Open Dataset Challenges at CVPR\u201920 are highly competitive competitions with the" +"---\nabstract: 'Testing procedures for assessing a parametric [regression model with circular response and $\\mathbb{R}^d$-valued]{} [covariate]{} are proposed and analyzed in this work[ both for]{} independent and for spatially correlated data. The [test]{} statistics are based on a circular distance comparing a ([non-smoothed]{} or smoothed) parametric [circular estimator]{} and a nonparametric [one]{}. [Properly designed bootstrap procedures for calibrating the [tests]{} in practice are also presented]{}. Finite sample performance of the [tests]{}[ in different scenarios with independent and spatially correlated samples,]{} is analyzed [by simulations]{}.'\nauthor:\n- |\n Andrea Meil\u00e1n-Vila\\\n Universidade da Coru\u00f1a[^1]\n- |\n Mario Francisco-Fern\u00e1ndez\\\n Universidade da Coru\u00f1a\n- |\n Rosa M. Crujeiras\\\n Universidade de Santiago de Compostela[^2]\nbibliography:\n- 'bibibnew.bib'\ntitle: 'Goodness-of-fit tests for parametric regression models with circular response'\n---\n\n*Keywords:* [ Model checking, Circular data, Local polynomial regression, Spatial correlation, Bootstrap]{}\n\nIntroduction {#sec:gof_circular_int}\n============\n\nIn many scientific fields, such as oceanography, meteorology or biology, data are angular measurements (points in the unit circle [of a circular variable]{}), which are accompanied by [auxiliary]{} observations of other Euclidean random variables. The joint behavior of these circular and Euclidean variables can be analyzed by considering a regression model, allowing at the same time to explain the possible relation between" +"---\nabstract: 'Saliency computation models aim to imitate the attention mechanism in the human visual system. The application of deep neural networks for saliency prediction has led to a drastic improvement over the last few years. However, deep models have a large number of parameters, which makes them less suitable for real-time applications. Here we propose a compact yet fast model for real-time saliency prediction. Our proposed model consists of a modified U-Net architecture, a novel fully connected layer, and central difference convolutional layers. The modified U-Net architecture promotes compactness and efficiency. The novel fully-connected layer facilitates the implicit capturing of the location-dependent information. 
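The ensemble step mentioned in the PV-RCNN record above (NMS with box voting) is easy to illustrate. The sketch below uses axis-aligned 2D boxes and an assumed IoU threshold for brevity, whereas the challenge entries operate on 3D boxes:

```python
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms_with_voting(boxes, scores, iou_thr=0.55):
    """Greedy NMS where each kept box is replaced by the score-weighted
    average of the boxes it suppresses (box voting)."""
    order = np.argsort(scores)[::-1]
    suppressed = np.zeros(len(boxes), dtype=bool)
    kept = []
    for i in order:
        if suppressed[i]:
            continue
        group = [j for j in order if not suppressed[j]
                 and iou(boxes[i], boxes[j]) >= iou_thr]
        for j in group:
            suppressed[j] = True
        w = scores[group][:, None]
        kept.append((np.sum(boxes[group] * w, axis=0) / w.sum(), scores[i]))
    return kept

# Pooled detections from several model variants (illustrative values):
boxes = np.array([[10, 10, 50, 50], [12, 11, 52, 49], [80, 80, 120, 130]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms_with_voting(boxes, scores))
```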
Using the central difference convolutional layers at different scales enables capturing more robust and biologically motivated features. We compare our model with state of the art saliency models using traditional saliency scores as well as our newly devised scheme. Experimental results over four challenging saliency benchmark datasets demonstrate the effectiveness of our approach in striking a balance between accuracy and speed. Our model can be run in real-time which makes it appealing for edge devices and video processing.'\nauthor:\n- |\n Samad Zabihi\\\n School of Electrical and Computer Engineering\\\n Shiraz University\\\n Shiraz, Iran\\\n `s.zabihi@shirazu.ac.ir`\\\n Hamed Rezazadegan" +"---\nabstract: 'In this work, we use the [astraeus]{} (seminumerical rAdiative tranSfer coupling of galaxy formaTion and Reionization in N-body dArk mattEr simUlationS) framework which couples galaxy formation and reionization in the first billion years. Exploring a number of models for reionization feedback and the escape fraction of ionizing radiation from the galactic environment ($f_\\mathrm{esc}$), we quantify how the contribution of star-forming galaxies [(with halo masses $M_h>10^{8.2}\\msun$)]{} to reionization depends on the radiative feedback model, $f_\\mathrm{esc}$, and the environmental over-density. Our key findings are: (i) for constant $f_\\mathrm{esc}$ models, intermediate-mass galaxies (with halo masses of $M_h\\simeq10^{9-11}\\msun$ and absolute UV magnitudes of $M_{UV} \\sim -15$ to $-20$) in intermediate-density regions (with over-density $\\log_{10}(1+\\delta) \\sim 0-0.8$ on a $2$\u00a0comoving Mpc spatial scale) drive reionization; (ii) scenarios where $f_\\mathrm{esc}$ increases with decreasing halo mass shift the galaxy population driving reionization to lower-mass galaxies ($M_h\\lesssim10^{9.5}\\msun$) with lower luminosities ($M_{UV} \\gtrsim-16$) and over-densities ($\\log_{10}(1+\\delta) \\sim 0-0.5$ on a $2$\u00a0comoving Mpc spatial scale); (iii) reionization imprints its topology on the ionizing emissivity of low-mass galaxies ($M_h\\lesssim10^{9}\\msun$) through radiative feedback. Low-mass galaxies experience a stronger suppression of star formation by radiative feedback and show lower ionizing emissivities in over-dense regions; (iv) a change in $f_\\mathrm{esc}$" +"---\nabstract: 'Preconditioning is the most widely used and effective way for treating ill-conditioned linear systems in the context of classical iterative linear system solvers. We introduce a quantum primitive called fast inversion, which can be used as a preconditioner for solving quantum linear systems. The key idea of fast inversion is to directly block-encode a matrix inverse through a quantum circuit implementing the inversion of eigenvalues via classical arithmetics. We demonstrate the application of preconditioned linear system solvers for computing single-particle Green\u2019s functions of quantum many-body systems, which are widely used in quantum physics, chemistry, and materials science. We analyze the complexities in three scenarios: the Hubbard model, the quantum many-body Hamiltonian in the planewave-dual basis, and the Schwinger model. 
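The central difference convolution named in the saliency record above is commonly implemented with the reformulation y = conv(x, w) - theta * conv(x, sum_spatial(w)). A minimal sketch, with theta = 0.7 as an assumed default:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDConv2d(nn.Conv2d):
    """Central difference convolution: mixes a vanilla convolution with a term
    responding to local intensity differences."""
    def __init__(self, *args, theta: float = 0.7, **kwargs):
        super().__init__(*args, **kwargs)
        self.theta = theta

    def forward(self, x):
        out = F.conv2d(x, self.weight, self.bias, self.stride,
                       self.padding, self.dilation, self.groups)
        if self.theta == 0:
            return out
        # Collapse each kernel to its spatial sum, applied as a 1x1 convolution;
        # shapes match the vanilla branch for odd kernels with 'same' padding.
        kernel_sum = self.weight.sum(dim=(2, 3), keepdim=True)
        diff = F.conv2d(x, kernel_sum, None, self.stride, 0,
                        self.dilation, self.groups)
        return out - self.theta * diff

layer = CDConv2d(3, 16, kernel_size=3, padding=1)
y = layer(torch.randn(1, 3, 64, 64))   # -> (1, 16, 64, 64)
```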
[We also provide a method for performing Green\u2019s function calculation in second quantization within a fixed particle manifold and note that this approach may be valuable for simulation more broadly.]{} Besides solving linear systems, fast inversion also allows us to develop fast algorithms for computing matrix functions, such as the efficient preparation of Gibbs states. We introduce two efficient approaches for such a task, based on the contour integral formulation and the inverse transform respectively.'\nauthor:\n- 'Yu Tong[^1]'" +"---\nabstract: 'Transparent conductive oxides such as indium tin oxide (ITO) bear the potential to deliver efficient all-optical functionality due to their record-breaking optical nonlinearity at epsilon near zero (ENZ) wavelengths. All-optical applications generally involve more than one beam, but the coherent interaction between beams has not previously been discussed in materials with a hot electron nonlinearity. Here we study the optical nonlinearity at ENZ in ITO and show that spatial and temporal interference has important consequences in a two beam geometry. Our pump-probe results reveal a polarization-dependent transient that is explained by momentary diffraction of pump light into the probe direction by a temperature grating produced by pump-probe interference. We further show that this effect allows tailoring the nonlinearity by tuning frequency or chirp. Having fine control over the strong and ultrafast ENZ nonlinearity may enable applications in all-optical neural networks, nanophotonics, and spectroscopy.'\nauthor:\n- 'J. Paul'\n- 'M. Miscuglio'\n- 'Y. Gui'\n- 'V. J. Sorger'\n- 'J. K. Wahlstrand'\ntitle: 'Two-beam coupling by a hot electron nonlinearity'\n---\n\nRecent years have seen growing interest in the nonlinear optics of transparent conductive oxides (TCO) such as indium tin oxide (ITO) and aluminum zinc oxide [@alam_large_2016; @caspani_enhanced_2016; @clerici_controlling_2017;" +"---\nauthor:\n- 'K.\u00a0Pouilly'\n- 'J. Bouvier'\n- 'E.\u00a0Alecian'\n- 'S.H.P.\u00a0Alencar'\n- 'A.-M.\u00a0Cody'\n- 'J.-F.\u00a0Donati'\n- 'K.\u00a0Grankin'\n- 'G.A.J.\u00a0Hussain'\n- 'L.\u00a0Rebull'\n- 'C.P.\u00a0Folsom'\nbibliography:\n- 'hqtau.bib'\ndate: 'Received 3 April 2020; Accepted 3 August 2020'\ntitle: 'Magnetospheric accretion in the intermediate-mass T Tauri star HQ\u00a0Tau[^1]'\n---\n\n[Classical T Tauri stars (cTTs) are pre-main sequence stars surrounded by an accretion disk. They host a strong magnetic field, and both magnetospheric accretion and ejection processes develop as the young magnetic star interacts with its disk. Studying this interaction is a major goal toward understanding the properties of young stars and their evolution.]{} [The goal of this study is to investigate the accretion process in the young stellar system HQ Tau, an intermediate-mass T Tauri star (1.9\u00a0[${\\rm M}_\\odot$]{}). ]{} [The time variability of the system is investigated both photometrically, using Kepler-K2 and complementary light curves, and from a high-resolution spectropolarimetric time series obtained with ESPaDOnS at CFHT. ]{} [The quasi-sinusoidal Kepler-K2 light curve exhibits a period of 2.424\u00a0d, which we ascribe to the rotational period of the star. 
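A rotation period such as the 2.424 d value quoted above can be recovered from a quasi-sinusoidal light curve by simple phase-dispersion minimization. The sketch below uses synthetic data and illustrative parameters, not the actual K2 photometry:

```python
import numpy as np

rng = np.random.default_rng(2)

true_period = 2.424                      # days, the value quoted in the text
t = np.sort(rng.uniform(0.0, 80.0, 600))
flux = (1.0 + 0.02 * np.sin(2 * np.pi * t / true_period)
        + 0.003 * rng.standard_normal(t.size))

def phase_dispersion(t, y, period, n_bins=10):
    """Sum of within-bin variances of the phase-folded light curve."""
    bins = (((t / period) % 1.0) * n_bins).astype(int)
    return sum(y[bins == b].var() for b in range(n_bins) if np.any(bins == b))

trials = np.linspace(1.5, 4.0, 4000)
scores = [phase_dispersion(t, flux, p) for p in trials]
print(f"recovered period: {trials[np.argmin(scores)]:.3f} d")   # ~2.424
```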
The radial velocity of the system shows the same periodicity, as expected from" +"---\nabstract: 'We explore quantum phase transitions in the spin-1/2 $XX$ chain with three-spin interaction in terms of local quantum Fisher information and one-way quantum deficit, together with the demonstration of quantum fluctuations. Analytical results are derived and analyzed in detail.'\nauthor:\n- 'Biao-Liang Ye'\n- Bo Li\n- 'Xiao-Bin Liang'\n- 'Shao-Ming Fei'\ntitle: 'Local quantum Fisher information and one-way quantum deficit in spin-$\\frac{1}{2}$ $XX$ Heisenberg chain with three-spin interaction'\n---\n\nIntroduction\n============\n\nQuantum entanglement plays a vital role in quantum information processing [@Horodecki2009]. As an important resource, quantum entangled states have been used in quantum teleportation [@Bennett1993], remote state preparation [@Bennett2001], secure quantum-communications network [@Yin2017], etc. Besides quantum entanglement, quantum discord characterizes non-classical correlations [@Modi2012]. The one-way quantum deficit [@Ye2016] is another key measure to describe quantum correlation [@Streltsov2011]. While the quantum Fisher information [@Petz2002; @Ye2018b] is important in the estimation accuracy scenarios.\n\nOn the other hand, the quantum phase transitions have received much attention in condensed matter physics [@Sachdev1999]. The quantum fluctuations are able to be illustrated by quantum correlations. In Ref. [@Osterloh2002] the role of entanglement played in phase transition and theory of critical phenomena in $XY$ system has been investigated. The quantum discord and entanglement" +"---\nabstract: 'Surgical skill assessment is important for surgery training and quality control. Prior works on this task largely focus on basic surgical tasks such as suturing and knot tying performed in simulation settings. In contrast, surgical skill assessment is studied in this paper on a real clinical dataset, which consists of fifty-seven in-vivo laparoscopic surgeries and corresponding skill scores annotated by six surgeons. From analyses on this dataset, the clearness of operating field (COF) is identified as a good proxy for overall surgical skills, given its strong correlation with overall skills and high inter-annotator consistency. Then an objective and automated framework based on neural network is proposed to predict surgical skills through the proxy of COF. The neural network is jointly trained with a supervised regression loss and an unsupervised rank loss. In experiments, the proposed method achieves 0.55 Spearman\u2019s correlation with the ground truth of overall technical skill, which is even comparable with the human performance of junior surgeons.'\nauthor:\n- Daochang Liu\n- Tingting Jiang\n- Yizhou Wang\n- Rulin Miao\n- Fei Shan\n- Ziyu Li\nbibliography:\n- 'mybib.bib'\ntitle: 'Surgical Skill Assessment on In-Vivo Clinical Data via the Clearness of Operating Field'\n---\n\nIntroduction\n============" +"---\nabstract: 'We present numerical homotopy continuation algorithms for solving systems of equations on a variety in the presence of a finite Khovanskii basis. These take advantage of Anderson\u2019s flat degeneration to a toric variety. When Anderson\u2019s degeneration embeds into projective space, our algorithm is a special case of a general toric two-step homotopy algorithm. 
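For readers unfamiliar with numerical homotopy continuation, a minimal univariate sweep with Newton corrections conveys the basic idea; the toric two-step homotopies discussed above are far more structured. All systems below are toy examples, and the random `gamma` implements the usual gamma trick to avoid singular paths:

```python
import numpy as np

f  = lambda x: x**3 - 2 * x + 3.0        # target system
df = lambda x: 3 * x**2 - 2.0
g  = lambda x: x**3 - 1.0                # start system with known roots
dg = lambda x: 3 * x**2
gamma = np.exp(1j * 0.73)

H   = lambda x, t: (1 - t) * gamma * g(x) + t * f(x)
dHx = lambda x, t: (1 - t) * gamma * dg(x) + t * df(x)

def track(x, steps=400, newton_iters=8):
    """Sweep t from 0 to 1, correcting with Newton's method at each step."""
    for t in np.linspace(0.0, 1.0, steps)[1:]:
        for _ in range(newton_iters):
            x = x - H(x, t) / dHx(x, t)
    return x

starts = np.exp(2j * np.pi * np.arange(3) / 3)   # cube roots of unity solve g = 0
found = np.sort_complex(np.round([track(s) for s in starts], 6))
exact = np.sort_complex(np.round(np.roots([1.0, 0.0, -2.0, 3.0]), 6))
print(found)
print(exact)   # tracked and directly computed roots agree
```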
When Anderson\u2019s degeneration is embedded in a weighted projective space, we explain how to lift to a projective space and construct an appropriate modification of the toric homotopy. Our algorithms are illustrated on several examples using `Macaulay2`.'\naddress:\n- 'Michael Burr, School of Mathematical and Statistical Sciences, Clemson University, 220 Parkway Drive, Clemson, SC 29634-0975, USA'\n- 'Frank Sottile, Department of Mathematics, Texas A&M University, College Station, Texas 77843, USA'\n- 'Elise Walker, Department of Mathematics, Texas A&M University, College Station, Texas 77843, USA'\nauthor:\n- 'M.\u00a0Burr'\n- 'F.\u00a0Sottile'\n- 'E.\u00a0Walker'\nbibliography:\n- 'bibl.bib'\ntitle: Numerical homotopies from Khovanskii bases\n---\n\nWe consider the problem of computing the isolated solutions to the system $$\\label{Eq:generalSystem}\n f_1(z)=f_2(z)=\\dotsb=f_d(z)=0,$$ where $f_1,\\dots,f_d$ are general members of a finite-dimensional vector space $V$ of rational functions on a complex algebraic variety $X$ of dimension $d$. Kaveh-Khovanskii\u00a0[@KaKha; @KaKhb] and" +"---\nabstract: |\n We investigate features of the sterile neutrinos in the presence of a light gauge boson $X^\\mu$ that couples to the neutrino sector. The novel bounds on the active-sterile neutrino mixings $| U_{\\ell 4} |^2$, especially for tau flavor ($l = \\tau$), from various collider and fixed target experiments are explored. Also, taking into account the additional decay channel of the sterile neutrino into a light gauge boson ($\\nu_4 \\to %\\nu_\\ell + X/X^{\\color{blue} *} \\to \n \\nu_\\ell e^+ e^-$), we explore and constrain a parameter space for low energy excess in neutrino oscillation experiments.\nauthor:\n- Yongsoo Jho\n- Jongkuk Kim\n- Pyungwon Ko\n- Seong Chan Park\nbibliography:\n- 'biblio.bib'\ntitle: |\n Search for sterile neutrino with light gauge interactions:\\\n recasting collider, beam-dump, and neutrino telescope searches\n---\n\nIntroduction\n============\n\nThe sterile neutrinos having no known non-gravitational couplings with the standard model (SM) particles have been seriously considered to interpret the recent observational anomalies in the neutrino oscillations, such as Low Energy Excess (LEE) reported from MiniBooNE\u00a0[@AguilarArevalo:2007it; @AguilarArevalo:2008rc; @AguilarArevalo:2010wv; @Aguilar-Arevalo:2013pmq; @Aguilar-Arevalo:2018gpe; @Aguilar-Arevalo:2020nvw] and LSND [@Athanassopoulos:1995iw; @Athanassopoulos:1996jb; @Athanassopoulos:1997pv; @Athanassopoulos:1996wc] experiments. They are also important targets to be discovered in fixed target and collider experiments\u00a0[@Ilten:2018crw; @Bauer:2018onh] when sterile" +"---\nabstract: 'In this paper, we study the cardinality constrained mean-variance-skewness-kurtosis (MVSKC) model for sparse high-order portfolio optimization. The MVSKC model is computationally challenging, as the objective function is non-convex and the cardinality constraint is discontinuous. Since the cardinality constraint has the difference-of-convex (DC) property, we transform it into a penalty term and then propose three algorithms, namely the proximal difference-of-convex algorithm (pDCA), pDCA with extrapolation (pDCAe), and the successive convex approximation (SCA), to handle the resulting penalized mean-variance-skewness-kurtosis (PMVSK) formulation. Moreover, we establish theoretical convergence results for pDCA and SCA. 
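A toy instance of the pDCA idea can be written in a few lines, with the cardinality penalty replaced by the standard $\ell_1$-minus-largest-$k$ DC surrogate and only a mean-variance smooth part (the PMVSK objective additionally carries skewness and kurtosis terms). All parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, rho, step = 30, 5, 0.5, 0.01

# Toy smooth part: mean-variance objective f(x) = 0.5 x'Sx - mu'x.
A = rng.standard_normal((n, n))
S = A @ A.T / n + 0.1 * np.eye(n)
mu = 0.05 * rng.standard_normal(n)
grad_f = lambda x: S @ x - mu

soft = lambda v, tau: np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.ones(n) / n
for _ in range(500):
    # DC surrogate of cardinality: ||x||_1 - |||x|||_k, where |||x|||_k sums
    # the k largest magnitudes; s is a subgradient of |||.|||_k at x.
    s = np.zeros(n)
    top = np.argsort(-np.abs(x))[:k]
    s[top] = np.sign(x[top])
    # pDCA step: forward step on the smooth + linearized concave parts,
    # then the proximal map of rho * ||.||_1 (soft-thresholding).
    x = soft(x - step * (grad_f(x) - rho * s), step * rho)

print("nonzero weights:", np.count_nonzero(np.abs(x) > 1e-8))
```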
Numerical experiments on a real dataset demonstrate the superiority of our proposed methods in obtaining better objective values and sparser solutions efficiently.'\naddress: |\n $^{\\dag}$Department of Systems Engineering and Engineering Management, CUHK, Hong Kong SAR, China\\\n $^{\\ddag}$Cainiao Network, Hangzhou, China\nbibliography:\n- 'refs.bib'\ntitle: 'Sparse High-Order Portfolios via Proximal DCA and SCA'\n---\n\nHigh-order portfolios, cardinality constraint, difference-of-convex, successive convex approximation\n\nIntroduction\n============\n\nPortfolio management is a fundamental and challenging task for investors. One significant progress was made by Markowitz, who developed the mean-variance (MV) framework\u00a0[@markowitz19521952]. In the MV framework, the investors\u2019 purpose is to maximize their expected profit (i.e., mean return rate, or first moment) and minimize" +"---\nabstract: '[ Benford\u2019s law states that for scale- and base-invariant data sets covering a wide dynamic range, the distribution of the first significant digit is biased towards low values. This has been shown to be true for wildly different datasets, including financial, geographical, and atomic data. In astronomy, earlier work showed that Benford\u2019s law also holds for distances estimated as the inverse of parallaxes from the ESA [*Hipparcos*]{} mission. ]{} [ We investigate whether Benford\u2019s law still holds for the 1.3\u00a0billion parallaxes contained in the second data release of [*Gaia*]{} ([*Gaia*]{} DR2). In contrast to previous work, we also include negative parallaxes. We examine whether distance estimates computed using a Bayesian approach instead of parallax inversion still follow Benford\u2019s law. Lastly, we investigate the use of Benford\u2019s law as a validation tool for the zero-point of the [*Gaia*]{} parallaxes. ]{} [ We computed histograms of the observed most significant digit of the parallaxes and distances, and compared them with the predicted values from Benford\u2019s law, as well as with theoretically expected histograms. The latter were derived from a simulated [*Gaia*]{} catalogue based on the Besan\u00e7on galaxy model. ]{} [ The observed parallaxes in [*Gaia*]{} DR2 indeed follow Benford\u2019s" +"---\nabstract: 'The query model (or black-box model) has attracted much attention from the communities of both classical and quantum computing. Usually, quantum advantages are revealed by presenting a quantum algorithm that has a better query complexity than its classical counterpart. For example, the well-known quantum algorithms including Deutsch-Jozsa algorithm, Simon algorithm and Grover algorithm all show a considerable advantage of quantum computing from the viewpoint of query complexity. Recently we have considered in (Phys. Rev. A. [**101**]{}, 02232 (2020)) the problem: what functions can be computed by an exact one-query quantum algorithm? This problem has been addressed for total Boolean functions but still open for partial Boolean functions. Thus, in this paper we continue to characterize the computational power of exact one-query quantum algorithms for partial Boolean functions by giving several necessary and sufficient conditions. By these conditions, we construct some new functions that can be computed exactly by one-query quantum algorithms but have essential difference from the already known ones. 
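The textbook example of an exact one-query quantum algorithm is Deutsch-Jozsa; a small NumPy simulation of its phase-oracle form shows the single-query determinism the passage above refers to:

```python
import numpy as np

def deutsch_jozsa(f_values):
    """Exact one-query Deutsch-Jozsa, phase-oracle form.

    f_values[x] = f(x) in {0, 1}, promised constant or balanced; the answer
    is read off deterministically from a single oracle application.
    """
    N = len(f_values)
    state = np.full(N, 1.0 / np.sqrt(N))        # Hadamards on |0...0>
    state *= (-1.0) ** np.asarray(f_values)     # one phase-oracle query
    amp0 = state.sum() / np.sqrt(N)             # amplitude on |0...0> after Hadamards
    return "constant" if abs(amp0) > 0.5 else "balanced"  # amp0 is exactly +-1 or 0

n = 3
const = np.zeros(2**n, dtype=int)
balanced = np.array([bin(x).count("1") % 2 for x in range(2**n)])
print(deutsch_jozsa(const), deutsch_jozsa(balanced))  # constant balanced
```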
Note that before our work, the known functions that can be computed by exact one-query quantum algorithms are all symmetric functions, whereas the ones constructed in this papers are generally asymmetric.'\nauthor:\n- 'Zekun Ye$^{1}$, Lvzhou Li$^{1, 2," +"---\nabstract: 'The driven-dissipative nature of light-matter interaction inside a multimode, dye-filled microcavity makes it an ideal system to study nonequilibrium phenomena, such as transport. In this work, we investigate how light is efficiently transported inside such a microcavity, mediated by incoherent absorption and emission processes. In particular, we show that there exist two distinct regimes of transport, viz. conductive and localized, arising from the complex interplay between the thermalizing effect of the dye molecules and the nonequilibrium influence of driving and loss. The propagation of light in the conductive regime occurs when several localized cavity modes undergo dynamical phase transitions to a condensed, or lasing, state. Further, we observe that while such transport is robust for weak disorder in the cavity potential, strong disorder can lead to localization of light even under good thermalizing conditions. Importantly, the exhibited transport and localization of light is a manifestation of the nonequilibrium dynamics rather than any coherent interference in the system.'\nauthor:\n- |\n Himadri S. Dhar$^1$, Jo[\u00e3]{}o D. Rodrigues$^1$, Benjamin T. Walker$^{1,2}$,\\\n Rupert F. Oulton$^1$, Robert A. Nyman$^1$, and Florian Mintert$^1$\ntitle: 'Transport and localization of light inside a dye-filled microcavity'\n---\n\nIntroduction \\[intro\\]\n======================\n\nIn recent years, substantial research effort" +"---\nabstract: 'In this note we investigate outcomes of a symplectic formula for the gravitational waves charges in the general relativity linearized around the de Sitter spacetime. We derive their explicit form at [*scri*]{} in the Bondi frame, compare with the connected Noether expression and analyze their gauge dependence which allows us to fix unambiguously boundary terms. We also discuss minimal requirements needed to impose on initial data to have finite values of charges. Furthermore, we analyze transformation laws of the energy upon the action of the de Sitter group and discuss its physical interpretation. Finally, we calculate its flux through a cosmological horizon instead of [*scri*]{}. We show that in the limit $\\Lambda \\to 0$, one recovers Trautman\u2013Bondi formula strengthening recent proposal that one should choose a\u00a0null surface as a more natural boundary for the astrophysical systems in the presence of the cosmological constant.'\naddress:\n- ' Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland'\n- ' Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland'\nauthor:\n- Maciej Kolanowski\n- Jerzy Lewandowski\nbibliography:\n- 'bibl.bib'\ntitle: Energy of gravitational radiation in the de Sitter" +"---\nabstract: 'We present a communication-efficient distributed protocol for computing the Babai point, an approximate nearest point for a random vector ${\\bf X}\\in\\mathbb{R}^n$ in a given lattice. We show that the protocol is optimal in the sense that it minimizes the sum rate when the components of ${\\mathbf{X}}$ are mutually independent. We then investigate the error probability, i.e. 
the probability that the Babai point does not coincide with the nearest lattice point. In dimensions two and three, this probability is seen to grow with the packing density. For higher dimensions, we use a bound from probability theory to estimate the error probability for some well-known lattices. Our investigations suggest that for uniform distributions, the error probability becomes large with the dimension of the lattice, for lattices with good packing densities. We also consider the case where ${\\mathbf{X}}$ is obtained by adding Gaussian noise to a randomly chosen lattice point. In this case, the error probability goes to zero with the lattice dimension when the noise variance is sufficiently small. In such cases, a distributed algorithm for finding the approximate nearest lattice point is sufficient for finding the nearest lattice point.'\nauthor:\n- 'Maiara\u00a0F.\u00a0Bollauf, Vinay\u00a0A.\u00a0Vaishampayan, and\u00a0Sueli" +"---\nauthor:\n- Jashwant Raj Gunasekaran\n- Prashanth Thinakaran\n- Nachiappan Chidambaram\n- 'Mahmut T. Kandemir'\n- 'Chita R. Das'\nbibliography:\n- 'references.bib'\ntitle: 'Fifer: Tackling Underutilization in the Serverless Era'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe advent of public clouds in the last decade has led to the explosion in the use of microservice-based applications\u00a0[@gan2019open]. Large cloud-based companies like Amazon\u00a0[@aws], Facebook\u00a0[@facebook], Twitter\u00a0[@Twitter], and Netflix\u00a0[@netflix] have capitalized on the ease of scalability and development offered by microservices, embracing it as a first-class application model\u00a0[@7515686]. For instance, a wide range of Machine Learning (ML) applications such as facial recognition\u00a0[@bartlett2005recognizing], virtual systems\u00a0[@sirius], content recommendation\u00a0[@hazelwood2018applied], etc., are realized as a series of inter-linked microservices[^1], also known as *microservice-chains*\u00a0[@wechat; @8486300]. These applications are user-facing\u00a0[@8675201] and hence, demand a strict service-level objective (SLO), which is usually under `1000 ms`\u00a0[@swayam; @p1SLO; @p3SLO]. It is, therefore, imperative to mitigate the end-to-end latency of a microservice-chain to provide a satisfactory user experience. The SLOs for such microservices are bounded by two factors \u2013 (i) resource provisioning latency, and (ii) application execution time. As a majority of these microservices usually execute within a few milliseconds\u00a0[@djinn; @sirius], *serverless" +"---\nauthor:\n- 'Laura Covi,'\n- 'Avirup Ghosh,'\n- 'Tanmoy Mondal,'\n- Biswarup Mukhopadhyaya\nbibliography:\n- 'Dmwtz2.bib'\ntitle: 'Models of decaying FIMP Dark Matter: potential links with the Neutrino Sector'\n---\n\nIntroduction {#sec:intro}\n============\n\nDark matter (DM) is an undeniable component of the universe today, playing a fundamental role in structure formation and in explaining galactic rotation curves and other astrophysical and cosmological observations\u00a0[@Ade:2015xua]. Assuming a $\\mathbb{Z}_2$ symmetry is a frequently adopted practice in ensuring a stable particle in the elementary particle spectrum, which can account for dark matter (DM) in our universe. In special cases like the minimal supersymmetric standard model (MSSM) lepton and baryon number conservation and stability of the proton may (though somewhat grudgingly) be taken as facts supported by experiments. 
In general, however, such broader theoretical motivation for $\mathbb{Z}_2$ symmetries is difficult to find. Furthermore, global symmetries are not likely to be respected by quantum gravity [@Banks:2010zn; @Mambrini:2015sia; @Harlow:2018jwu]. Thus even a scenario that is $\mathbb{Z}_2$-symmetric at low energy may permit a very small violation of the discrete symmetry when one takes its UV-completion into account.\n\nOn the other hand, dark matter does not have to be absolutely stable. Indeed it is possible that the dark" +"---\nabstract: 'We introduce a theoretical framework for resource-efficient characterization and control of non-Markovian open quantum systems, which naturally allows for the integration of given, experimentally motivated, control capabilities and constraints. This is achieved by developing a transfer filter-function formalism based on the general notion of a [*frame*]{} and by appropriately tying the choice of frame to the available control. While recovering the standard frequency-based filter-function formalism as a special instance, this *control-adapted* generalization affords intrinsic flexibility and, crucially, it permits an efficient representation of the relevant control matrix elements and dynamical integrals if an appropriate *finite-size frame condition* is obeyed. Our frame-based formulation overcomes important limitations of existing approaches. In particular, we show how to implement quantum noise spectroscopy in the presence of *non-stationary* noise sources, and how to effectively achieve *control-driven model reduction* for noise-optimized prediction and quantum gate design.'\nauthor:\n- Teerawat Chalermpusitarak\n- Behnam Tonekaboni\n- \n- 'Leigh M. Norris'\n- Lorenza Viola\n- 'Gerardo A. Paz-Silva'\ntitle: 'Frame-Based Filter-Function Formalism for Quantum Characterization and Control'\n---\n\n[^1]\n\n[^2]\n\n[^3]\n\n[^4]\n\n[^5]\n\nIntroduction\n============\n\nAccurate characterization and control (C&C) of open quantum systems coupled to realistic \u2013 temporally correlated (\u201cnon-Markovian\u201d) \u2013 noise environments are vital for" +"---\nauthor:\n- |\n Paraskevi Chasani and Aristidis Likas$$ [^1]\\\n Department of Computer Science and Engineering\\\n University of Ioannina\\\n GR 45110, Ioannina, Greece\\\n e-mail: {pchasani, arly}@cs.uoi.gr\nbibliography:\n- 'arxiv-manuscript.bib'\ntitle: 'The UU-test for Statistical Modeling of Unimodal Data'\n---\n\n------------------------------------------------------------------------\n\n**Abstract**\n\nDeciding on the unimodality of a dataset is an important problem in data analysis and statistical modeling. It allows one to obtain knowledge about the structure of the dataset, i.e. whether data points have been generated by a probability distribution with a single peak or with more than one peak. Such knowledge is very useful for several data analysis problems, such as for deciding on the number of clusters and determining unimodal projections. We propose a technique called UU-test (Unimodal Uniform test) to decide on the unimodality of a one-dimensional dataset. The method operates on the empirical cumulative distribution function (ecdf) of the dataset. It attempts to build a piecewise linear approximation of the ecdf that is unimodal and models the data sufficiently in the sense that the data corresponding to each linear segment follows the uniform distribution.
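The per-segment uniformity check that this construction relies on can be sketched with a Kolmogorov-Smirnov distance; this is only the building block, not the UU-test itself, and the segments below are synthetic:

```python
import numpy as np

def ks_uniform(segment):
    """KS distance between a data segment and the uniform distribution
    on [min(segment), max(segment)]."""
    z = np.sort(segment)
    u = (z - z[0]) / (z[-1] - z[0])          # uniform CDF values at the data
    n = len(z)
    return max(np.max(np.arange(1, n + 1) / n - u),
               np.max(u - np.arange(0, n) / n))

rng = np.random.default_rng(5)
print(f"uniform segment:  D = {ks_uniform(rng.uniform(0, 1, 400)):.3f}")  # small
print(f"gaussian segment: D = {ks_uniform(rng.normal(0, 1, 400)):.3f}")   # larger
```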
A unique feature of this approach is that in the case of unimodality, it also provides a statistical model of the data in the" +"---\nabstract: 'Motivated by the recent discovery of a large anomalous Nernst effect in Co$_2$MnGa, Fe$_3X$ ($X$=Al, Ga) and Co$_3$Sn$_2$S$_2$, we performed a first-principles study to clarify the origin of the enhancement of the transverse thermoelectric conductivity ($\\alpha_{ij}$) in these ferromagnets. The intrinsic contribution to $\\alpha_{ij}$ can be understood in terms of the Berry curvature ($\\Omega$) around the Fermi level, and $\\Omega$ is singularly large along nodal lines (which are gapless in the absence of the spin-orbit coupling) in the Brillouin zone. We find that not only the Weyl points but also stationary points in the energy dispersion of the nodal lines play a crucial role. The stationary points make sharp peaks in the density of states projected onto the nodal line, clearly identifying the characteristic Fermi energies at which $\\alpha_{ij}$ is most dramatically enhanced. We also find that $\\alpha_{ij}/T$ breaks the Mott relation and show a peculiar temperature dependence at these energies. The present results suggest that the stationary points will give us a useful guiding principle to design magnets showing a large anomalous Nernst effect.'\nauthor:\n- Susumu Minami\n- Fumiyuki Ishii\n- Motoaki Hirayama\n- Takuya Nomoto\n- Takashi Koretsune\n- Ryotaro Arita\nbibliography:\n- 'ref/ref.bib'\ntitle:" +"---\nabstract: 'We explore the significance of bars in triggering central star formation (SF) and AGN activity for spiral galaxy evolution using a volume-limited sample with $0.02070{{\\rm~km~s^{-1}}}$ selected from SDSS DR7. On a central SF rate-$\\sigma$ plane, we measure the fraction of galaxies with strong bars in our sample and also the AGN fractions for barred and non-barred galaxies, respectively. The comparison between the bar and AGN fractions reveals a causal connection between the two phenomena of SF quenching and AGN activity. A massive BH and abundant gas fuels are sufficient conditions to trigger AGNs. We infer that the AGNs triggered by satisfying the two conditions drive the strong AGN feedback, suddenly suppressing the central SF and leaving the SF sequence. We find that in galaxies where either of the two conditions is not sufficient, bars are a great help for the AGN triggering, accelerating the entire process of evolution, which is particularly evident in pseudo-bulge galaxies. All of our findings are obtained only when plotted in terms of their central velocity dispersion and central SFR (not galactic scale SFR), indicating that the AGN-driven SF quenching is confined in the central kpc region.'\nauthor:\n- |" +"---\nabstract: 'Temporal Neural Networks (TNNs) use time as a resource to represent and process information, mimicking the behavior of the mammalian neocortex. This work focuses on implementing TNNs using off-the-shelf digital CMOS technology. A microarchitecture framework is introduced with a hierarchy of building blocks including: multi-neuron *columns*, multi-column *layers*, and multi-layer TNNs. We present the *direct* CMOS gate-level implementation of the multi-neuron column model as the key building block for TNNs. Post-synthesis results are obtained using Synopsys tools and the 45 nm CMOS standard cell library. The TNN microarchitecture framework is embodied in a set of characteristic equations for assessing the total gate count, die area, compute time, and power consumption for any TNN design. 
We develop a multi-layer TNN prototype of 32M gates. In 7 nm CMOS process, it consumes only 1.54 mm^2^ die area and 7.26 mW power and can process 28x28 images at 107M FPS (9.34 ns per image). We evaluate the prototype\u2019s performance and complexity relative to a recent state-of-the-art TNN model.'\nauthor:\n- Harideep Nair\n- John Paul Shen\n- 'James E. Smith'\nbibliography:\n- 'main\\_v3.bib'\ntitle: Direct CMOS Implementation of Neuromorphic Temporal Neural Networks for Sensory Processing\n---\n\nIntroduction\n============\n\nTemporal Neural Networks" +"---\nabstract: 'We theoretically study the angular displacements estimation based on a modified Mach-Zehnder interferometer (MZI), in which two optical parametric amplifiers (PAs) are introduced into two arms of the standard MZI, respectively. The employment of PAs can both squeeze the shot noise and amplify the photon number inside the interferometer. When the unknown angular displacements are introduced to both arms, we derive the multiparameter quantum Cram\u00e9r-Rao bound (QCRB) using the quantum Fisher information matrix approach, and the bound of angular displacements difference between the two arms is compared with the sensitivity of angular displacement using the intensity detection. On the other hand, in the case where the unknown angular displacement is in only one arm, we give the sensitivity of angular displacement using the method of homodyne detection. It can surpass the standard quantum limit (SQL) and approach the single parameter QCRB. Finally, the effect of photon losses on sensitivity is discussed.'\naddress: |\n State Key Laboratory of Precision Spectroscopy, Quantum Institute for Light and Atoms, Department of Physics, East China Normal University, Shanghai 200062, China\\\n School of Physics and Astronomy, and Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai 200240, China\\\n Collaborative Innovation Center of Extreme Optics, Shanxi" +"---\nabstract: 'Liquid crystals (LCs) can host robust topological defect structures that essentially determine their optical and elastic properties. Although recent experimental progress enables precise control over localization and dynamics of nematic LC defects, their practical potential for information storage and processing has yet to be explored. Here, we introduce the concept of nematic bits (nbits) by exploiting a quaternionic mapping from LC defects to the Poincar\u00e9-Bloch sphere. Through theory and simulations, we demonstrate how single-nbit operations can be implemented using electric fields, in close analogy with Pauli, Hadamard and other common quantum gates. Ensembles of two-nbit states can exhibit strong statistical correlations arising from nematoelastic interactions, which can be used as a computational resource. Utilizing nematoelastic interactions, we show how suitably arranged 4-nbit configurations can realize universal classical NOR and NAND gates. Finally, we demonstrate the implementation of generalized logical functions that take values on the Poincar\u00e9-Bloch sphere. These results open a new route towards the implementation of classical and non-classical computation strategies in topological soft matter systems.'\nauthor:\n- \u017diga Kos\n- J\u00f6rn Dunkel\ntitle: Nematic bits and universal logic gates\n---\n\nBits are the fundamental units of binary digital computation and information storage. 
Similar to an idealized" +"---\nabstract: 'We introduce a generalized version of the concave Kurdyka-\u0141ojasiewicz (KL) property by employing nonsmooth desingularizing functions. We also present the exact modulus of the generalized concave KL property, which provides an answer to the open question regarding the optimal concave desingularizing function. The exact modulus is designed to be the smallest among all possible concave desingularizing functions. Examples are given to illustrate this pleasant property. In turn, using the exact modulus we provide the sharpest upper bound for the total length of iterates generated by the celebrated Bolte-Sabach-Teboulle PALM algorithm.'\nauthor:\n- 'Xianfu\u00a0Wang[^1] and Ziyuan Wang[^2]'\nbibliography:\n- 'KL\\_modulus\\_reference.bib'\ntitle: '[The exact modulus of the generalized concave Kurdyka-\u0141ojasiewicz property]{}'\n---\n\n[**2010 Mathematics Subject Classification:**]{} Primary 49J52, 26D10, 90C26; Secondary 26A51, 26B25.\n\n[**Keywords:**]{} Generalized concave Kurdyka-\u0141ojasiewicz property, Kurdyka-\u0141ojasiewicz property, optimal concave desingularizing function, Bolte-Daniilidis-Ley-Mazet desingularizing function, proximal alternating linearized minimization, nonconvex optimization.\n\nIntroduction {#Intro}\n============\n\nThe continuous optimization community has witnessed a surging interest of employing the concave KL property (see Definition\u00a0\\[Def:KL property\\]) to solve problems from various applications, such as image processing\u00a0[@Ipiano2016; @Banert2019], compressed sensing\u00a0[@TKP2017extra; @yu2020convergence; @TKP2019], machine learning\u00a0[@Lange2019] and many more. The aforementioned work, despite devoting to different proximal-type algorithms, share a" +"---\nabstract: 'Data from gravitational wave detectors are recorded as time series that include contributions from myriad noise sources in addition to any gravitational wave signals. When regularly sampled data are available, such as for ground based and future space based interferometers, analyses are typically performed in the frequency domain, where stationary (time invariant) noise processes can be modeled very efficiently. In reality, detector noise is not stationary due to a combination of short duration noise transients and longer duration drifts in the power spectrum. This non-stationarity produces correlations across samples at different frequencies, obviating the main advantage of a frequency domain analysis. Here an alternative time-frequency approach to gravitational wave data analysis is proposed that uses discrete, orthogonal wavelet wavepackets. The time domain data is mapped onto a uniform grid of time-frequency pixels. For locally stationary noise - that is, noise with an adiabatically varying spectrum - the time-frequency pixels are uncorrelated, which greatly simplifies the calculation of quantities such as the likelihood. Moreover, the gravitational wave signals from binary systems can be compactly represented as a collection of lines in time-frequency space, resulting in a computational cost for computing waveforms and likelihoods that scales as the square root" +"---\nabstract: |\n Higher-order accuracy (order of $k+1$ in the $L^2$ norm) is one of the well known beneficial properties of the discontinuous Galerkin (DG) method. Furthermore, many studies have demonstrated the superconvergence property (order of $2k+1$ in the negative norm) of the semi-discrete DG method. 
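The B-spline machinery underlying the post-processing discussed next is compact enough to sketch: a symmetric kernel of $2k+1$ shifted B-splines whose coefficients are fixed by moment (polynomial-reproduction) conditions. The quadrature and parameters below are illustrative only:

```python
import numpy as np

def bspline(x, order):
    """Centered cardinal B-spline of the given order (Cox-de Boor recursion)."""
    def N(u, k):
        if k == 1:
            return ((0.0 <= u) & (u < 1.0)).astype(float)
        return (u * N(u, k - 1) + (k - u) * N(u - 1.0, k - 1)) / (k - 1.0)
    return N(np.asarray(x, dtype=float) + order / 2.0, order)

k = 2                                    # polynomial degree of the DG space
order, shifts = k + 1, np.arange(-k, k + 1, dtype=float)
xs = np.linspace(-6.0, 6.0, 6001)
dx = xs[1] - xs[0]

# Moment conditions: sum_j c_j * integral(x^m * psi(x - shift_j)) = delta_{m0}
# for m = 0..2k, so the kernel reproduces polynomials up to degree 2k.
M = np.array([[np.sum(xs**m * bspline(xs - s, order)) * dx for s in shifts]
              for m in range(2 * k + 1)])
c = np.linalg.solve(M, np.eye(2 * k + 1)[:, 0])
print(np.round(c, 4))   # symmetric coefficients; the kernel has unit mass
```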
One can take advantage of this superconvergence property by post-processing techniques to enhance the accuracy of the DG solution. A popular class of post-processing techniques to raise the convergence rate from order $k+1$ to order $2k+1$ in the $L^2$ norm is the Smoothness-Increasing Accuracy-Conserving (SIAC) filtering. In addition to enhancing the accuracy, the SIAC filtering also increases the inter-element smoothness of the DG solution. The SIAC filtering was introduced for the DG method of the linear hyperbolic equation by Cockburn et al. in 2003. Since then, there are many generalizations of the SIAC filtering have been proposed. However, the development of SIAC filtering has never gone beyond the framework of using spline functions (mostly B-splines) to construct the filter function. In this paper, we first investigate the general basis function (beyond the spline functions) that can be used to construct the SIAC filter. The studies of the general basis function relax the SIAC filter" +"---\nabstract: 'Numerous studies have investigated the effectiveness of audio-visual multimodal learning for speech enhancement (AVSE) tasks, seeking a solution that uses visual data as auxiliary and complementary input to reduce the noise of noisy speech signals. Recently, we proposed a lite audio-visual speech enhancement (LAVSE) algorithm for a car-driving scenario. Compared to conventional AVSE systems, LAVSE requires less online computation and to some extent solves the user privacy problem on facial data. In this study, we extend LAVSE to improve its ability to address three practical issues often encountered in implementing AVSE systems, namely, the additional cost of processing visual data, audio-visual asynchronization, and low-quality visual data. The proposed system is termed improved LAVSE (iLAVSE), which uses a convolutional recurrent neural network architecture as the core AVSE model. We evaluate iLAVSE on the Taiwan Mandarin speech with video dataset. Experimental results confirm that compared to conventional AVSE systems, iLAVSE can effectively overcome the aforementioned three practical issues and can improve enhancement performance. The results also confirm that iLAVSE is suitable for real-world scenarios, where high-quality audio-visual sensors may not always be available.'\nauthor:\n- 'Shang-Yi Chuang, Hsin-Min Wang,\u00a0 Yu Tsao,\u00a0[^1] [^2]'\nbibliography:\n- 'refs.bib'\ntitle: 'Improved Lite Audio-Visual" +"---\nabstract: 'A dynamic wetting problem is studied for a moving thin fiber inserted in fluid and with a chemically inhomogeneous surface. A reduced model is derived for contact angle hysteresis by using the Onsager principle as an approximation tool. The model is simple and captures the essential dynamics of the contact angle. From this model we derive an upper bound of the advancing contact angle and a lower bound of the receding angle, which are verified by numerical simulations. The results are consistent with the quasi-static results. The model can also be used to understand the asymmetric dependence of the advancing and receding contact angles on the fiber velocity, which is observed recently in physical experiments reported in [*Guan et al Phys. Rev. Lett. 2016*]{}.'\nauthor:\n- Xianmin Xu\n- Xiaoping Wang\nbibliography:\n- 'literW.bib'\ntitle: Theoretical analysis for dynamic contact angle hysteresis on chemically patterned surfaces\n---\n\nIntroduction\n============\n\nWetting is a common phenomenon in nature and our daily life. 
It is a fundamental problem with applications in many industrial processes, like coating, printing and oil industry, etc. In equilibrium state, wetting on smooth homogeneous surfaces can be described by the Young\u2019s equation [@Young1805]. It becomes much more" +"---\nabstract: |\n Based on its off-diagonal Bethe ansatz solution, we study the thermodynamic limit of the spin-$\\frac{1}{2}$ XYZ spin chain with the antiperiodic boundary condition. The key point of our method is that there exist some degenerate points of the crossing parameter $\\eta_{m,l}$, at which the associated inhomogeneous $T-Q$ relation becomes a homogeneous one. This makes extrapolating the formulae deriving from the homogeneous one to an arbitrary $\\eta$ with $O(N^{-2})$ corrections for a large $N$ possible. The ground state energy and elementary excitations of the system are obtained. By taking the trigonometric limit, we also give the results of antiperiodic XXZ spin chain within the gapless region in the thermodynamic limit, which does not have any degenerate points.\n\n [*PACS:*]{} 75.10.Pq, 02.30.Ik, 71.10.Pm\n\n [*Keywords*]{}: Bethe Ansatz; Lattice Integrable Models; $T-Q$ Relation\nauthor:\n- |\n Zhirong Xin${}^{a}$, Yusong Cao${}^{b,c}$, Xiaotian Xu${}^{b}\\footnote{Corresponding author:\n xtxu@nwu.edu.cn}$, Tao Yang${}^{d,e,f}$,\\\n Junpeng Cao${}^{b,c,f,g}$ and Wen-Li Yang${}^{d,e,f}$\ntitle: 'Thermodynamic limit of the spin-$\\frac{1}{2}$ XYZ spin chain with the antiperiodic boundary condition'\n---\n\n${}^a$ School of Physics and Electronic Information, Baicheng Normal University, China\\\n${}^b$ Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China\\\n${}^c$ Songshan Lake Materials Laboratory, Dongguan, Guangdong" +"---\nabstract: |\n Predicting future frames of a video sequence has been a problem of high interest in the field of Computer Vision as it caters to a multitude of applications. The ability to predict, anticipate and reason about future events is the essence of intelligence and one of the main goals of decision-making systems such as human-machine interaction, robot navigation and autonomous driving. However, the challenge lies in the ambiguous nature of the problem as there may be multiple future sequences possible for the same input video shot. A naively designed model averages multiple possible futures into a single blurry prediction.\n\n Recently, two distinct approaches have attempted to address this problem as: (a) use of latent variable models that represent underlying stochasticity and (b) adversarially trained models that aim to produce sharper images. A latent variable model often struggles to produce realistic results, while an adversarially trained model underutilizes latent variables and thus fails to produce diverse predictions. These methods have revealed complementary strengths and weaknesses. Combining the two approaches produces predictions that appear more realistic and better cover the range of plausible futures. This forms the basis and objective of study in this project work.\n\n In this paper," +"---\nabstract: 'In this paper we study variational inequalities (VI) defined by the conditional value-at-risk (CVaR) of uncertain functions. We introduce stochastic approximation schemes that employ an empirical estimate of the CVaR at each iteration to solve these VIs. 
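The empirical CVaR estimate mentioned above is one line via the Rockafellar-Uryasev identity; a hedged sketch on synthetic losses (the confidence level is an arbitrary choice):

```python
import numpy as np

def empirical_cvar(losses, beta=0.95):
    """Empirical CVaR_beta via CVaR = VaR + E[(L - VaR)_+] / (1 - beta)."""
    var = np.quantile(losses, beta)
    return var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - beta)

rng = np.random.default_rng(6)
losses = rng.standard_normal(200_000)
print(round(empirical_cvar(losses), 3))  # ~2.06 for a standard normal, beta=0.95
```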
We investigate convergence of these algorithms under various assumptions on the monotonicity of the VI and accuracy of the CVaR estimate. Our first algorithm is shown to converge to the exact solution of the VI when the estimation error of the CVaR becomes progressively smaller along any execution of the algorithm. When the estimation error is nonvanishing, we provide two algorithms that provably converge to a neighborhood of the solution of the VI. For these schemes, under strong monotonicity, we provide an explicit relationship between sample size, estimation error, and the size of the neighborhood to which convergence is achieved. A simulation example illustrates our theoretical\u00a0findings.'\nauthor:\n- 'Jasper Verbree Ashish Cherukuri [^1]'\nbibliography:\n- 'bibfile.bib'\ntitle: 'Stochastic approximation for CVaR-based variational inequalities'\n---\n\nIntroduction {#section introduciton}\n============\n\nVariational inequality (VI) problems find application in a broad range of areas\u00a0[@FF-JSP:03], e.g., in game theory, under mild conditions, solutions to a VI correspond to Nash equilibria of a game. Similarly," +"---\nabstract: 'We demonstrate the fabrication of ultra-low-loss, all-fiber Fabry-P\u00e9rot cavities containing a nanofiber section, optimized for cavity quantum electrodynamics. By continuously monitoring the finesse and fiber radius during fabrication of a nanofiber between two fiber Bragg gratings, we are able to precisely evaluate taper transmission as a function of radius. The resulting cavities have an internal round-trip loss of only 0.31% at a nanofiber waist radius of 207\u00a0nm, with a total finesse of 1380, and a maximum expected internal cooperativity of $\\sim$\u00a01050 for a cesium atom on the nanofiber surface. Our ability to fabricate such high-finesse nanofiber cavities may open the door for the realization of high-fidelity scalable quantum networks.'\nauthor:\n- 'Samuel K. Ruddell'\n- 'Karen E. Webb'\n- Mitsuyoshi Takahata\n- Shinya Kato\n- Takao Aoki\ntitle: 'Ultra-low-loss nanofiber Fabry-P\u00e9rot cavities optimized for cavity quantum electrodynamics'\n---\n\nCavity quantum electrodynamics (CQED) provides a robust platform for the implementation of quantum nodes, which could form the basis of a scalable quantum network\u00a0[@kimble08; @reiserer15]. To maximize the efficiency and fidelity of quantum operations at these nodes, the fabrication of optical cavities having high cooperativity, determined by low loss and high atom\u2013cavity coupling strength, is required\u00a0[@reiserer15]." +"---\nabstract: 'Computer-aided diagnosis (CAD) has long become an integral part of radiological management of breast disease, facilitating a number of important clinical applications, including quantitative assessment of breast density and early detection of malignancies based on X-ray mammography. Common to such applications is the need to automatically discriminate between breast tissue and adjacent anatomy, with the latter being predominantly represented by pectoralis major (or pectoral muscle). Especially in the case of mammograms acquired in the mediolateral oblique (MLO) view, the muscle is easily confusable with some elements of breast anatomy due to their morphological and photometric similarity. As a result, the problem of automatic detection and segmentation of pectoral muscle in MLO mammograms remains a challenging task, innovative approaches to which are still required and constantly searched for. 
To address this problem, the present paper introduces a two-step segmentation strategy based on a combined use of data-driven prediction (deep learning) and graph-based image processing. In particular, the proposed method employs a convolutional neural network (CNN) which is designed to predict the location of breast-pectoral boundary at different levels of spatial resolution. Subsequently, the predictions are used by the second stage of the algorithm, in which the desired boundary is" +"---\nabstract: 'Let $G$ be one of the classical groups of Lie rank $l$. We make a similar construction of a general extension field in differential Galois theory for $G$ as E. Noether did in classical Galois theory for finite groups. More precisely, we build a differential field $E$ of differential transcendence degree $l$ over the constants on which the group $G$ acts and show that it is a Picard-Vessiot extension of the field of invariants $E^G$. The field $E^G$ is differentially generated by $l$ differential polynomials which are differentially algebraically independent over the constants. They are the coefficients of the defining equation of the extension. Finally we show that our construction satisfies generic properties for a specific kind of $G$-primitive Picard-Vessiot extensions.'\naddress: |\n Universit\u00e4t Kassel\\\n Fachbereich 10\\\n Heinrich Plett Str. 40\\\n 34132 Kassel\\\n Germany.\nauthor:\n- Matthias Seiss\nbibliography:\n- 'main.bib'\ntitle: On General Extension Fields for the Classical Groups in Differential Galois Theory\n---\n\nIntroduction\n============\n\nIn classical Galois theory there is a well-known construction of the general equation with Galois group the symmetric group $S_n$. As a starting point one takes $n$ indeterminates $\\boldsymbol{T}=(T_1,\\dots,T_n)$ and considers the rational function field $\\mathbb{Q}(\\boldsymbol{T})$. The group $S_n$ acts on" +"---\nauthor:\n- Paolo Marcoccia\n- ', Felicia Fredriksson'\n- ', Alex B. Nielsen'\n- ', Germano Nardini'\nbibliography:\n- 'bibfile.bib'\ndate: August 2020\ntitle: 'Pearson cross-correlation in the first four black hole binary mergers'\n---\n\nIntroduction\n============\n\nThe detection of gravitational waves (GW) by LIGO was a major milestone in the history of astronomy\u00a0[@Abbott:2016blz]. Achieving the necessary strain sensitivity of the instruments was a memorable technological accomplishment [@TheLIGOScientific:2016agk], while determining the required waveforms was an astonishing success of the physics community\u00a0[@Brugmann366]. Despite these great achievements, the detection of the signals remains a challenge: the signals are still at low signal to noise ratios and must be extracted from the data using advanced statistical techniques and signal processing.\n\nThe most sensitive gravitational wave data searches rely on matched-filtering techniques\u00a0[@Allen:2005fk; @Usman:2015kfa; @Sachdev:2019vvd]. These are based on comparing the data with a class of signals determined in a specified theory. Such techniques are robust and very sensitive if the source waveform is accurately predicted. For signals with an uncertain modelling, more model-independent, although less sensitive, searches are necessary\u00a0[@Klimenko:2015ypf; @Lynch:2015yin; @LIGOScientific:2019fpa; @Salemi:2019uea; @Tsang:2019zra; @Edelman:2020aqj]. The coherent wave approach is particularly suitable in these cases. 
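A minimal illustration of the lagged Pearson cross-correlation between two detector streams, on synthetic data with an injected inter-detector delay (all signals and parameters below are hypothetical):

```python
import numpy as np

def pearson_lagged(x, y, max_lag):
    """Pearson correlation coefficient between x and shifted copies of y."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        a, b = (x[lag:], y[:len(y) - lag]) if lag >= 0 else (x[:lag], y[-lag:])
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

rng = np.random.default_rng(7)
n, shift = 4096, 7                     # offset in samples (e.g. light travel time)
s = np.sin(2 * np.pi * 0.01 * np.arange(n + shift))
det1 = s[:n] + 0.5 * rng.standard_normal(n)       # hypothetical detector 1
det2 = s[shift:] + 0.5 * rng.standard_normal(n)   # detector 2, delayed copy
corr = pearson_lagged(det1, det2, 20)
best = max(corr, key=corr.get)
print(best, round(corr[best], 3))      # recovers the injected 7-sample offset
```

Such a lagged Pearson statistic is the simplest model-independent check; the coherent wave approach generalizes it.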
Its flexibility enables it to" +"---\nabstract: 'This document outlines a tutorial to get started with medical image registration using the open-source package DeepReg. The basic concepts of medical image registration are discussed, linking \u201cclassical\" methods to newer methods using deep learning. Two iterative, classical algorithms using optimisation and one learning-based algorithm using deep learning are coded step-by-step using DeepReg utilities, all with real, open-accessible, medical data.'\nauthor:\n- Nina Montana Brown\n- Yunguan Fu\n- Shaheer Saeed\n- Adria Casamitjana\n- 'Zachary M. C. Baum'\n- Remi Delaunay\n- Qianye Yang\n- Alexander Grimwood\n- Zhe Min\n- Ester Bonmati\n- Tom Vercauteren\n- 'Matthew J. Clarkson'\n- Yipeng Hu\ntitle: 'Introduction To Medical Image Registration with DeepReg, Between Old and New'\n---\n\nObjective of the Tutorial\n=========================\n\nThis tutorial introduces a new open-source project [DeepReg](https://github.com/DeepRegNet/DeepReg), currently based on the latest release of TensorFlow 2. This package is designed to accelerate research in image registration using parallel computing and deep learning by providing simple, tested entry points to pre-designed networks for users to get a head start with. Additionally, DeepReg provides more basic functionalities such as custom TensorFlow layers which allow more seasoned researchers to build more complex functionalities.\n\nA previous MICCAI workshop learn2reg" +"=1\n\nIntroduction\n============\n\nIn Section\u00a0\\[sec: graphs\\], we describe the graph-theoretic framework for the investigation of the algebraic information contained in the topology of scalar Feynman diagrams. Perturbative quantum field theories possess an inherent algebraic structure, which underlies the combinatorics of recursion governing renormalisation theory, and are thus deeply connected to the theory of graphs.\n\n=1 In Section\u00a0\\[sec: geometry\\], we broadly review preliminary notions in algebraic geometry and algebraic topology. An algebraic variety over $\\mathbb{Q}$ gives rise to two distinct rational structures via algebraic de Rham cohomology and Betti cohomology, which are compatible with each other only after complexification. The coexistence of these two cohomologies and their peculiar compatibility are linked to a specific class of complex numbers, known as periods. The cohomology of an\u00a0algebraic variety is equipped with two filtrations, and the mixed Hodge structure arising from their interaction constitutes the bridge between the theory of periods and the theory of motives.\n\nIn Section\u00a0\\[sec: periods\\], we introduce the set of periods, lying between $\\bar{\\mathbb{Q}}$ and $\\mathbb{C}$, among which are the numbers that come from evaluating parametric Feynman integrals, and we briefly review their remarkable properties. Suitable cohomological structures are exploited to derive non-trivial information about these" +"---\nabstract: 'We construct a microscopic model to study discrete randomness in bistable systems coupled to an environment comprising many degrees of freedom. A quartic double well is bilinearly coupled to a finite number $N$ of harmonic oscillators. Solving the time-reversal invariant Hamiltonian equations of motion numerically, we show that for $N = 1$, the system exhibits a transition with increasing coupling strength from integrable to chaotic motion, following the KAM scenario. 
Raising $N$ to values of the order of 10 and higher, the dynamics crosses over to a quasi-relaxation, approaching either one of the stable equilibria at the two minima of the potential. We corroborate the irreversibility of this relaxation on other characteristic timescales of the system by recording the time dependences of autocorrelation, partial entropy, and the frequency of jumps between the wells as functions of $N$ and other parameters. Preparing the central system in the unstable equilibrium at the top of the barrier and the bath in a random initial state drawn from a Gaussian distribution, symmetric under spatial reflection, we demonstrate that the decision whether to relax into the left or the right well is determined reproducibly by residual asymmetries in the initial positions and momenta" +"---\nabstract: 'Using concomitantly the Generalized Second Law of black hole thermodynamics and the holographic Bekenstein entropy bound embellished by Loop Quantum Gravity corrections to quantum black hole entropy, we show that the boundary cross-sectional area of the post-merger remnant formed from the compact binary merger in gravitational wave detection experiments like GW150914 [*et. seq.*]{}, by the LIGO-VIRGO collaboration, is bounded from below. This lower bound is more general than the bound obtained from application of Hawking\u2019s classical area theorem for black holes, since it does not depend on whether the inspiralling compact binary pair or the postmerger remnant consists of black holes or other exotic compact objects. The derivation of the bound entails an estimate of the entropy of the gravitational waves emitted during the binary merger which adapts to gravitational waves an extant formalism proposed originally for particle ensembles. The results for the minimal cross-sectional area of the merger remnant due to binary compact mergers observed recently by the LIGO-VIRGO collaboration are discussed. While accurate measurement of the mass of the remnant for the BNS merger GW170817 remains a challenge, we provide a [*proof of principle*]{} that for BNS mergers our lower bound on the cross-sectional area of" +"---\nabstract: 'In this paper, we present a novel low-light image enhancement method called dark region-aware low-light image enhancement (DALE), where dark regions are accurately recognized by the proposed visual attention module and their brightness are intensively enhanced. Our method can estimate the visual attention in an efficient manner using super-pixels without any complicated process. Thus, the method can preserve the color, tone, and brightness of original images and prevents normally illuminated areas of the images from being saturated and distorted. Experimental results show that our method accurately identifies dark regions via the proposed visual attention, and qualitatively and quantitatively outperforms state-of-the-art methods.'\nbibliography:\n- 'egbib.bib'\ntitle: 'DALE : Dark Region-Aware Low-light Image Enhancement'\n---\n\nIntroduction {#sec:intro}\n============\n\nReal-world images for outdoor scenes typically contain low-light areas, especially if the images are captured during nighttime or there exists backlit. However, using these low-light images, conventional computer vision algorithms ([*e*.*g*., ]{}object detection and tracking) cannot produce accurate results, because low-light regions cause images to lose local details and significantly reduce image quality. 
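A crude hand-crafted stand-in for the idea of concentrating enhancement on dark regions (not the paper's learned attention module) can be written with a brightness-based attention map and gamma correction; all parameters are illustrative:

```python
import numpy as np

def dark_region_gamma(img, gamma=2.2, smooth=9):
    """Brighten dark regions of a [0,1] grayscale image.

    Attention is high where a local brightness average is low, so the gamma
    curve acts mostly there and well-lit areas stay nearly untouched.
    """
    pad = smooth // 2
    padded = np.pad(img, pad, mode="edge")
    local = np.array([[padded[i:i + smooth, j:j + smooth].mean()
                       for j in range(img.shape[1])]
                      for i in range(img.shape[0])])
    attention = 1.0 - local                  # ~1 in dark regions, ~0 in bright
    enhanced = img ** (1.0 / gamma)          # global gamma brightening
    return attention * enhanced + (1 - attention) * img

rng = np.random.default_rng(8)
img = np.clip(rng.random((32, 32)) * np.linspace(0.05, 1.0, 32), 0, 1)
out = dark_region_gamma(img)
print(img.mean().round(3), out.mean().round(3))  # mean brightness increases
```

Real systems learn such an attention map from data rather than hand-crafting it.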
Therefore, low-light image enhancement is essential to prevent conventional computer vision algorithms from degrading their performance.\n\nLow-light image enhancement has a long history. For example, Pizer [*et al*. ]{}" +"---\nabstract: 'Quasicrystals lack translational symmetry, but can still exhibit long-ranged order, promoting them to candidates for unconventional physics beyond the paradigm of crystals. Here, we apply a real-space functional renormalization group approach to the prototypical quasicrystalline Penrose tiling Hubbard model treating competing electronic instabilities in an unbiased, beyond-mean-field fashion. Our work reveals a delicate interplay between charge and spin degrees of freedom in quasicrystals. Depending on the range of interactions and hopping amplitudes, we unveil a rich phase diagram including antiferromagnetic orderings, charge density waves and subleading, superconducting pairing tendencies. For certain parameter regimes we find a competition of phases, which is also common in crystals, but additionally encounter phases coexisting in a spatially separated fashion and ordering tendencies which mutually collaborate to enhance their strength. We therefore establish that quasicrystalline structures open up a route towards this rich ordering behavior uncommon to crystals and that an unbiased, beyond-mean-field approach is essential to describe this physics of quasicrystals correctly.'\nauthor:\n- 'J.B.\u00a0Hauck'\n- 'C.\u00a0Honerkamp'\n- 'S.\u00a0Achilles'\n- 'D.M.\u00a0Kennes'\nbibliography:\n- '2ndrev\\_MAIN.bib'\ntitle: |\n Electronic instabilities in Penrose quasicrystals:\\\n competition, coexistence and collaboration of order\n---\n\n*Introduction. \u2014* The discovery of quasicrystals has triggered exciting, pioneering" +"---\nabstract: 'We consider $f(R)$ gravity theories in the presence of a scalar field minimally coupled to gravity with a self-interacting potential. When the scalar field backreacts to the metric we find at large distances scalarized Schwarzschild-AdS and Schwarzschild-AdS-like black hole solutions. At small distances due to strong curvature effects and the scalar dynamis we find a rich structure of scalarized black hole solutions. When the scalar field is conformally coupled to gravity we also find scalarized black hole solutions at small distances.'\nauthor:\n- 'Zi-Yu Tang'\n- Bin Wang\n- Thanasis Karakasis\n- Eleftherios Papantonopoulos\ntitle: 'Curvature Scalarization of Black Holes in $f(R)$ Gravity '\n---\n\nIntroduction\n============\n\nThe study of black hole solutions with scalar hair is a very interesting aspect of General Relativity (GR) and had attracted a lot of interest. These hairy black holes are solutions generated from a modified Einstein-Hilbert action in which a scalar field coupled to gravity is introduced. However, these solutions in order the scalar field to be regular on the horizon and well behaved at large distances have to obey the powerful no-hair theorems. The first hairy black hole solutions of GR were found in asymptotically flat spacetimes [@BBMB] but it" +"---\nabstract: 'Shepherding involves herding a swarm of agents\u00a0(*sheep*) by another a control agent\u00a0(*sheepdog*) towards a goal. Multiple approaches have been documented in the literature to model this behaviour. In this paper, we present a modification to a well-known shepherding approach, and show, via simulation, that this modification improves shepherding efficacy. 
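The "well-known shepherding approach" is commonly taken to be a Strombom-style model; assuming that, the driving-point computation at its heart is a few lines (the offset and flock parameters below are hypothetical):

```python
import numpy as np

def driving_point(sheep, goal, offset=2.0):
    """Driving position: a point behind the flock's centre of mass, on the
    line from the goal through the flock, at a fixed offset."""
    com = sheep.mean(axis=0)
    direction = com - goal
    direction = direction / np.linalg.norm(direction)
    return com + offset * direction

rng = np.random.default_rng(9)
sheep = rng.normal(loc=[10.0, 10.0], scale=1.0, size=(20, 2))
goal = np.array([0.0, 0.0])
print(np.round(sheep.mean(axis=0), 2), np.round(driving_point(sheep, goal), 2))
```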
We then argue that given the complexity arising from obstacle-laden environments, path planning approaches could further enhance this model. To validate this hypothesis, we present a 2-stage evolutionary-based path planning algorithm for shepherding a swarm of agents in 2D environments. In the first stage, the algorithm attempts to find the best path for the sheepdog to move from its initial location to a strategic driving location behind the sheep. In the second stage, it calculates and optimises a path for the sheep. It does so by using *way points* on that path as the sequential sub-goals for the sheepdog to aim towards. The proposed algorithm is evaluated in obstacle-laden environments via simulation, with further improvements achieved.'\nauthor:\n- \ntitle: Path Planning for Shepherding a Swarm in a Cluttered Environment using Differential Evolution\n---\n\nDifferential Evolution, Path Planning, Shepherding, Swarm Guidance.\n\nIntroduction {#sec:introduction}\n============\n\nShepherding refers to" +"---\nabstract: |\n Given a graph $G$, the $k$-mixing problem asks: Starting with a $k$-colouring of $G$, can one obtain all $k$-colourings of $G$ by changing the colour of only one vertex at a time, while at each step maintaining a $k$-colouring? More generally, for a graph $H$, the $H$-mixing problem asks: Can one obtain all homomorphisms $G \\to H$, starting from one homomorphism $f$, by changing the image of only one vertex at a time, while at each step maintaining a homomorphism $G \\to H$?\n\n This paper focuses on a generalization of $k$-colourings, namely $(p,q)$-circular colourings. We show that when $2 < \\frac{p}{q} < 4$, a graph $G$ is $(p,q)$-mixing if and only if for any $(p,q)$-colouring $f$ of $G$, and any cycle $C$ of $G$, the wind of the cycle under the colouring equals a particular value (which intuitively corresponds to having no wind). As a consequence we show that $(p,q)$-mixing is closed under a restricted homomorphism called a fold. Using this, we deduce that $(2k+1,k)$-mixing is co-NP-complete for all $k \\in \\mathbb{N}$, and by similar ideas we show that if the circular chromatic number of a connected graph $G$ is $\\frac{2k+1}{k}$, then $G$ folds to $C_{2k+1}$. We" +"---\nabstract: 'This article focuses on the development of high-order energy stable schemes for the multi-length-scale incommensurate phase-field crystal model, which makes it possible to study the phase behavior of aperiodic structures. These high-order schemes based on the scalar auxiliary variable (SAV) and spectral deferred correction (SDC) approaches are suitable for the $L^2$ gradient flow equation, *i.e.*, the Allen-Cahn dynamic equation. Concretely, we propose a second-order Crank-Nicolson (CN) scheme of the SAV system, prove the energy dissipation law, and give the error estimate in the almost periodic function sense. Moreover, we use the SDC method to improve the computational accuracy of the SAV/CN scheme. Numerical results demonstrate the advantages of high-order numerical methods in numerical computations and show the influence of length-scales on the formation of ordered structures.'\ntitle: 'High-order energy stable schemes of incommensurate phase-field crystal model'\n---\n\nKai Jiang$^*$ and Wei Si\n\nIntroduction {#sec:intro}\n============\n\nAperiodic crystals, such as quasicrystals, are an important class of materials whose Fourier spectra cannot all be expressed by a set of basis vectors over the rational number field.
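As a hedged illustration of the SAV reformulation behind the Crank-Nicolson scheme mentioned in the abstract above: for an $L^2$ gradient flow with free energy $E[\phi]=\tfrac12(\phi,\mathcal{L}\phi)+E_1[\phi]$, where $\mathcal{L}$ collects the linear part, one introduces the scalar auxiliary variable $r(t)=\sqrt{E_1[\phi]+C_0}$ and evolves the equivalent system (a generic SAV form; the paper's precise scheme may differ in details):

$$
\phi_t = -\mu,\qquad
\mu = \mathcal{L}\phi + \frac{r}{\sqrt{E_1[\phi]+C_0}}\,U[\phi],\qquad
r_t = \frac{1}{2\sqrt{E_1[\phi]+C_0}}\,\big(U[\phi],\phi_t\big),
$$

with $U[\phi]=\delta E_1/\delta\phi$. A Crank-Nicolson discretisation of this system treats $\mathcal{L}$ implicitly at the half step and the nonlinear terms through explicit extrapolation, which yields a linear scheme per time step together with an unconditional discrete energy dissipation law.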
The irrational coefficients give rise to dense Fourier spectra, which results in difficulties for theoretical study. Theoretically, a multiple characteristic" +"---\nauthor:\n- 'A.E. Bondar'\n- 'A.I. Milstein'\ntitle: 'Charge asymmetry in decays $ B\\rightarrow D\\bar DK$'\n---\n\nIntroduction\n============\n\nRecently, at an LHC seminar at CERN\u00a0[@CERN_Seminar; @LHCb], the LHCb collaboration presented preliminary results of an amplitude analysis of the decay $B\\rightarrow D^+ D^-K^+$. General attention was drawn to the presence of a peak at an energy of 2.9\u00a0GeV in the distribution over the invariant mass of $D^-K^+$, Fig.\u00a0\\[fig:LHCb\\_MDK\\]. In the short time since the presentation, many articles have appeared offering different interpretations of this phenomenon\u00a0[@karliner; @Mei; @Xiao; @Xiao1; @Jian; @Qi; @Ming; @Hua; @Jun; @Zhi; @Yin].\n\n![Distribution over the invariant mass of $ D^-K^+ $ in the decay $ B^+ \\rightarrow\n D^+D^-K^+ $ in the LHCb\u00a0[@CERN_Seminar] data. The dots show the data, the curves show the resulting fit function and the contributions of the individual components of the model.[]{data-label=\"fig:LHCb_MDK\"}](LHCb_plot_MDK.pdf){width=\"70.00000%\"}\n\nThese interpretations are based on hypotheses about the production of a compact $\\bar c\\bar s ud$ tetraquark, $D^*K^*$ molecules, etc. However, no one paid attention to another interesting phenomenon that is clearly manifested in the LHCb data. In the distribution over the invariant mass $D^+D^-$ (Fig.\u00a0\\[fig:LHCb\\_MDD\\]) in the decay $B^+ \\rightarrow D^+D^-K^+$, two peaks are observed, which" +"---\nabstract: 'We present a method which enables solid-state density functional theory calculations to be applied to systems of almost unlimited size. Computations of physical effects that reach up to the micron length scale, but nevertheless depend on the microscopic details of the electronic structure, are made possible. Our approach is based on a generalization of the Bloch state which involves an additional sum over a finer grid in reciprocal space around each ${\\bf k}$-point. We show that this allows for modulations in the density and magnetization of arbitrary length on top of a lattice-periodic solution. Based on this, we derive a set of ultra long-range Kohn-Sham equations. We demonstrate our method with a sample calculation, containing nearly 3500 atoms, of bulk LiF subjected to an arbitrary external potential. We also confirm the accuracy of the method by comparing the spin density wave state of bcc Cr against a direct super-cell calculation starting from a random magnetization density. Furthermore, the spin spiral state of $\\gamma$-Fe is correctly reproduced and the screening by the density of a saw-tooth potential over 20 unit cells of silicon is verified.'\nauthor:\n- 'T. M\u00fcller'\n- 'S. Sharma'\n- 'E. K. U. Gross'\n- 'J. K." +"---\nabstract: 'We investigate the deflection angle in a strong deflection limit for a marginally unstable photon sphere in a general asymptotically flat, static and spherically symmetric spacetime under some assumptions to calculate observables. The deflection angle of a light ray reflected by the marginally unstable photon sphere diverges nonlogarithmically while the one reflected by a photon sphere diverges logarithmically.
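For context, the standard strong-deflection-limit expansion for an ordinary (unstable) photon sphere is the well-known logarithmic law, written here in terms of the impact parameter $b$ and its critical value $b_c$:

$$
\alpha(b) = -\bar{a}\,\log\!\left(\frac{b}{b_c}-1\right) + \bar{b} \;+\; \text{corrections that vanish as } b \to b_c,
$$

where $\bar{a}$ and $\bar{b}$ are constants determined by the metric. The abstract above concerns the marginally unstable case, in which this logarithmic law breaks down and is replaced by a nonlogarithmic (power-law) divergence.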
We apply our formula to a Reissner-Nordstr\u00f6m spacetime and Hayward spacetime.'\nauthor:\n- 'Naoki Tsukamoto${}^{1}$'\ntitle: Deflection angle of a light ray reflected by a general marginally unstable photon sphere in a strong deflection limit\n---\n\nIntroduction\n============\n\nRecently, the direct detection of gravitational waves emitted by black holes has been reported by LIGO and VIRGO Collaborations [@Abbott:2016blz; @LIGOScientific:2018mvr] and the detection of the shadow of the black hole candidate at the center of the giant elliptical galaxy M87 has been reported by the Event Horizon Telescope Collaboration\u00a0[@Akiyama:2019cqa]. Further observations will allow us to probe the physics of strong gravitational fields in nature in more detail.\n\nStatic, spherically symmetric spacetimes, which describe strong gravitational fields caused by compact objects, have circular photon orbits, and the set of the photon orbits is called a photon (antiphoton) sphere if it is unstable (stable)\u00a0[@Perlick_2004_Living_Rev]. The property of the" +"---\nabstract: |\n Causal variance decompositions for a given disease-specific quality indicator can be used to quantify differences in performance between hospitals or health care providers. While variance decompositions can demonstrate variation in quality of care, causal mediation analysis can be used to study care pathways leading to the differences in performance between the institutions. This raises the question of whether the two approaches can be combined to decompose between-hospital variation in an outcome type indicator into that mediated through a given process (indirect effect) and remaining variation due to all other pathways (direct effect). For this purpose, we derive a causal mediation analysis decomposition of between-hospital variance, discuss its interpretation, and propose an estimation approach based on generalized linear mixed models for the outcome and the mediator. We study the performance of the estimators in a simulation study and demonstrate its use in administrative data on kidney cancer care in Ontario.\n\n [**Keywords:**]{} Causal mediation analysis, Hospital profiling, Quality indicator, Variance decomposition\nauthor:\n- Bo\u00a0Chen\n- 'Keith A.\u00a0Lawson'\n- Antonio\u00a0Finelli\n- 'Olli\u00a0Saarela[^1]'\ntitle: 'Causal Mediation Analysis Decomposition of Between-hospital Variance'\n---\n\nIntroduction {#section:intro}\n============\n\nQuality of healthcare can be compared between institutions such as hospitals or" +"---\nabstract: 'We propose a novel physical layer secret key generation method for inter-spacecraft communication links. By exploiting the Doppler frequency shifts of the reciprocal spacecraft links as a unique secrecy source, spacecraft aim to obtain identical secret keys from their individual observations. We obtain theoretical expressions for the key disagreement rate (KDR). Using generalized Gauss-Laguerre quadrature, we derive closed-form expressions for the KDR. Through numerical studies, the tightness of the provided approximations is shown.
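As an illustration of the quadrature tool invoked above (not the paper's actual KDR integrand, which is not reproduced here), generalized Gauss-Laguerre rules approximate integrals of the form $\int_0^\infty x^{\alpha}e^{-x}f(x)\,dx$:

```python
# Minimal sketch of generalized Gauss-Laguerre quadrature; the integrand `f`
# below is a placeholder, not the KDR expression from the paper.
import numpy as np
from scipy.special import roots_genlaguerre

def glq_integral(f, alpha=0.0, deg=32):
    """Approximate int_0^inf x**alpha * exp(-x) * f(x) dx."""
    x, w = roots_genlaguerre(deg, alpha)  # nodes and weights
    return float(np.dot(w, f(x)))

# Sanity check: for f = 1 the integral equals Gamma(alpha + 1).
print(glq_integral(lambda x: np.ones_like(x), alpha=1.0))  # ~1.0 = Gamma(2)
```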
Both the theoretical and numerical results demonstrate the validity and the practicality of the presented physical layer key generation procedure for securing the communication links of spacecraft.'\nauthor:\n- \ntitle: |\n Securing the Inter-Spacecraft Links:\\\n Doppler Frequency Shift based Physical Layer Key Generation\\\n---\n\nspace network, Doppler frequency shift, inter-[spacecraft]{} link security, physical layer key generation.\n\nIntroduction\n============\n\nAs the commercial opportunities of space exploration flourish, space networks are becoming the next frontier in wireless communication. Recent advances in rocket launch platforms, the availability of dedicated frequency spectrum, and the availability of smaller, lower-complexity devices have lowered the cost of spacecraft-supported services, including space travel. Today mostly the small-space satellites" +"---\nauthor:\n- 'Athanasios N. Nikolakopoulos'\nbibliography:\n- 'pnas-sample.bib'\nnocite: '[@nikolakopoulos2019recwalk; @nikolakopoulos2020boosting; @nikolakopoulos2014use; @nikolakopoulos2019eigenrec; @nikolakopoulos2016multi; @nikolakopoulos2016multi]'\ntitle: 'Random Surfing Revisited: Generalizing PageRank\u2019s Teleportation Model'\n---\n\nRandom Surfer Model: A Tale of Two \u201cRemedies\u201d {#Ch:Intro:Sec:RandomSurfing}\n=============================================\n\nThe basic idea behind PageRank\u2019s approach to calculating the importance of individual nodes in a network is very intuitive. In their seminal paper Page *et al.*\u00a0[@pagerank] imagined a *Random Surfer* of the network that jumps forever from node to node, and then, following this intuitive metaphor, they defined the overall importance of a node to be equal to *the fraction of time this random surfer spends on it, in the long run*. Underlying the definition of PageRank is the assumption that the existence of a link from a node $u$ to a node $v$ testifies to the importance of node $v$. Furthermore, the amount of importance conferred to node $v$ is proportional to the importance of node $u$ and inversely proportional to the number of nodes $u$ links to. To formulate PageRank\u2019s basic idea with a mathematical model, we can construct a *row-normalized adjacency matrix* $\\mathbf{H}$, whose element $H_{uv}$ is one over the outdegree of $u$ if there is a link from $u$ to" +"---\nabstract: 'Domain translation is the task of finding correspondence between two domains. Several Deep Neural Network (DNN) models, e.g., CycleGAN and cross-lingual language models, have shown remarkable successes on this task under the unsupervised setting\u2014the mappings between the domains are learned from two independent sets of training data in both domains (without paired samples). However, those methods typically do not perform well on a significant proportion of test samples. In this paper, we hypothesize that many such unsuccessful samples lie at the *fringe*\u2014relatively low-density areas\u2014of the data distribution, where the DNN was not trained very well, and propose to perform Langevin dynamics to bring such fringe samples towards high-density areas. We demonstrate qualitatively and quantitatively that our strategy, called *Langevin Cooling* ([L-Cool]{}), enhances state-of-the-art methods in image translation and language translation tasks.'\nauthor:\n- |\n Vignesh Srinivasan, Klaus-Robert M[\u00fc]{}ller$^{\\thanks{Corresponding authors: K.-R. M{\\\"u}ller, W. Samek and S. Nakajima.
\newline\n V. Srinivasan is with the Machine Learning Group, Fraunhofer Heinrich Hertz Institute,\n 10587 Berlin, Germany. \newline (e-mail:\n vignesh.srinivasan@hhi.fraunhofer.de). \newline \n W. Samek is with the Machine Learning Group, Fraunhofer Heinrich Hertz Institute,\n 10587 Berlin, Germany and also with BiFOLD. \n (e-mail: wojciech.samek@hhi.fraunhofer.de). \n \\newline\n K.-R. M{\\\"u}ller is with the Machine Learning Group, Technische" +"---\nabstract: 'Although solar-analog stars have been studied extensively over the past few decades, most of these studies have focused on visible wavelengths, especially those identifying solar-analog stars to be used as calibration tools for observations. As a result, there is a dearth of well-characterized solar analogs for observations in the near-infrared, a wavelength range important for studying solar system objects. We present 184 stars selected based on solar-like spectral type and V-J and V-K colors whose spectra we have observed in the 0.8-4.2 micron range for calibrating our asteroid observations. Each star has been classified into one of three ranks based on spectral resemblance to vetted solar analogs. Of our set of 184 stars, we report 145 as reliable solar-analog stars, 21 as solar analogs usable after spectral corrections with low-order polynomial fitting, and 18 as unsuitable for use as calibration standards owing to spectral shape, variability, or features at low to medium resolution. We conclude that all but 5 of our candidates are reliable solar analogs in the longer wavelength range from 2.5 to 4.2 microns. The average colors of the stars classified as reliable or usable solar analogs are V-J=1.148, V-H=1.418, and V-K=1.491, with the entire set" +"---\nabstract: 'While there is overwhelming observational evidence of AGN-driven jets in galaxy clusters and groups, [*if*]{} and [*how*]{} the jet energy is delivered to the ambient medium remains unanswered. Here we perform very high resolution AGN jet simulations within a live, cosmologically evolved cluster with the moving mesh code [arepo]{}. We find that mock X-ray and radio lobe properties are in good agreement with observations, with different-power jets transitioning from FR-I to FR-II-like morphologies. During the lobe inflation phase, heating by both internal and bow shocks contributes to lobe energetics, and $\\sim 40$ per cent of the feedback energy goes into the $PdV$ work done by the expanding lobes. Low power jets are more likely to simply displace gas during lobe inflation, but higher power jets become more effective at driving shocks and heating the intracluster medium (ICM), although shocks rarely exceed $\\mathcal{M}\\sim 2-3$. Once the lobe inflation phase ceases, cluster weather significantly impacts the lobe evolution. Lower power jet lobes are more readily disrupted and mixed with the ICM, depositing up to $\\sim 70$ per cent of the injected energy; however, ultimately the equivalent of $\\simgt 50$ per cent of the feedback energy ends up as potential" +"---\nabstract: 'Effective non-parametric density estimation is a key challenge in high-dimensional multivariate data analysis. In this paper, we propose a novel approach that builds upon tensor factorization tools. Any multivariate density can be represented by its characteristic function, via the Fourier transform.
If the sought density is compactly supported, then its characteristic function can be approximated, within controllable error, by a finite tensor of leading Fourier coefficients, whose size depends on the smoothness of the underlying density. This tensor can be naturally estimated from observed and possibly incomplete realizations of the random vector of interest, via sample averaging. In order to circumvent the curse of dimensionality, we introduce a low-rank model of this [*characteristic tensor*]{}, which significantly improves the density estimate especially for high-dimensional data and/or in the sample-starved regime. By virtue of uniqueness of low-rank tensor decomposition, under certain conditions, our method enables learning the true data-generating distribution. We demonstrate the very promising performance of the proposed method using several toy, measured, and image datasets.'\nauthor:\n- 'Magda\u00a0Amiridi, Nikos\u00a0Kargas, and\u00a0Nicholas D. Sidiropoulos,\u00a0 [^1]'\nbibliography:\n- 'references.bib'\ntitle: 'Low-rank Characteristic Tensor Density Estimation Part I: Foundations'\n---\n\nStatistical learning, Probability Density Function (PDF), Characteristic Function (CF)," +"---\nabstract: 'Pulsar timing arrays (PTA) hold the promise of detecting gravitational waves (GWs) from sources which are in a unique frequency range of $10^{-9}$-$10^{-6}$ Hz. This in turn also provides an opportunity to test the theory of general relativity in the low frequency regime. The central concept of the detection of GWs with PTA lies in measuring the time of arrival difference of the pulsar signal due to the passing of GWs, i.e., the pulses get red-shifted. In this paper we provide a complete derivation of the redshift computation for all six possible polarizations of GWs which arise due to modifications of general relativity. We discuss the smoothness of the redshift and related properties at the critical point, where the GW source lies directly behind the pulsar. From our mathematical discussion we conclude that the redshift has to be split differently into a polarization part (pattern functions) and an interference part, to avoid discontinuities and singularities in the pattern functions. This choice of pattern functions agrees with the formula one uses for interferometers with a single detector arm. Finally, we provide a general expression which can in principle be used for pulsars and GWs of any frequency without invoking" +"---\nabstract: 'In 1987, Stanley conjectured that if a centrally symmetric Cohen\u2013Macaulay simplicial complex $\\Delta$ of dimension $d-1$ satisfies $h_i(\\Delta)=\\binom{d}{i}$ for some $i\\geq 1$, then $h_j(\\Delta)=\\binom{d}{j}$ for all $j\\geq i$. Much more recently, Klee, Nevo, Novik, and Zheng conjectured that if a centrally symmetric simplicial polytope $P$ of dimension $d$ satisfies $g_i(\\partial P)=\\binom{d}{i}-\\binom{d}{i-1}$ for some $d/2\\geq i\\geq 1$, then $g_j(\\partial P)=\\binom{d}{j}-\\binom{d}{j-1}$ for all $d/2\\geq j\\geq i$.
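The extremal values appearing in the two conjectures above are simple binomial expressions; a few lines suffice to tabulate them (purely illustrative arithmetic):

```python
# Tabulate the extremal h- and g-values from the two conjectures:
# h_i = C(d, i) and g_i = C(d, i) - C(d, i-1).
from math import comb

d = 6
h = [comb(d, i) for i in range(d + 1)]
g = [comb(d, i) - comb(d, i - 1) for i in range(1, d // 2 + 1)]
print(h)  # [1, 6, 15, 20, 15, 6, 1]
print(g)  # [5, 9, 5]
```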
This note uses stress spaces to prove both of these conjectures.'\nauthor:\n- |\n Isabella Novik[^1]\\\n Department of Mathematics\\\n University of Washington\\\n Seattle, WA 98195-4350, USA\\\n `novik@uw.edu`\n- |\n Hailun Zheng[^2]\\\n Department of Mathematical Sciences\\\n University of Copenhagen\\\n Universitesparken 5, 2100 Copenhagen, Denmark\\\n `hz@math.ku.dk`\nbibliography:\n- 'refs.bib'\ntitle: The stresses on centrally symmetric complexes and the lower bound theorems\n---\n\nIntroduction\n============\n\nThis paper is devoted to analyzing the cases of equality in Stanley\u2019s lower bound theorems on the face numbers of centrally symmetric Cohen\u2013Macaulay complexes and centrally symmetric polytopes. All complexes considered in this paper are simplicial.\n\nIn the seventies, Stanley and Hochster (independently of each other) introduced the notion of Stanley\u2013Reisner rings and started developing their theory, see [@Hochster; @Reisner; @Stanley75; @Stanley77]. In the fifty years since, this theory has become a" +"---\nabstract: 'Gravitational wave signatures from dynamical scalar field configurations provide a compelling observational window on the early universe. Here we identify intriguing connections between dark matter and scalar fields that emit gravitational waves, either through a first order phase transition or through oscillations after inflation. To study gravitational waves from first order phase transitions, we investigate a simplified model consisting of a heavy scalar coupled to a vector and fermion field. We then compute gravitational wave spectra sourced by inflaton field configurations oscillating after E-Model and T-Model inflation. Some of these gravitational wave signatures can be uncovered by the future Big Bang Observatory, although in general we find that MHz-GHz frequency gravitational wave sensitivity will be critical for discovering the heaviest dark sectors. Intriguingly, we find that scalars undergoing phase transitions, along with E-Model and T-Model potentials, can impel a late-time dark matter mass boost and generate up to Planck mass dark matter. For phase transitions and oscillating inflatons, the largest dark matter mass boosts correspond to higher amplitude stochastic gravitational wave backgrounds.'\nauthor:\n- |\n Amit Bhoonah$^\\eth$, Joseph Bramante$^{\\eth,\\dagger}$, Simran Nerval$^{\\eth}$, Ningqiang Song$^{\\eth,\\dagger}$\\\n [$^\\eth$ The Arthur B. McDonald Canadian Astroparticle Physics Research Institute and]{}\\\n [Department of Physics, Engineering Physics," +"---\nauthor:\n- Keisuke Inomata\nbibliography:\n- 'draft\\_secondary\\_scalar.bib'\ntitle: Analytic solutions of scalar perturbations induced by scalar perturbations \n---\n\nRESCEU-19/20\n\nIntroduction {#sec:intro}\n============\n\nInflation theory predicts that cosmological perturbations are generated from the quantum fluctuations of the fields, which means that valuable information about the early Universe is imprinted on the perturbations. To reveal the history of the Universe, many authors have studied the perturbations for several decades. Based on the behavior under the spatial coordinate transformation, the cosmological perturbations can be divided into three types: scalar, vector, and tensor perturbations. In particular, scalar perturbations have been playing important roles in the study of the Universe.
For example, on large scales ($\gtrsim 1$Mpc), the amplitude and tilt of the power spectrum of scalar perturbations have been determined through the observations of the large scale structure and the anisotropies of the cosmic microwave background (CMB)\u00a0[@Aghanim:2018eyx]. These observational results put constraints on parameters of inflation models\u00a0[@Akrami:2018odb]. Also, scalar perturbations on small scales ($\lesssim 1$Mpc) have been attracting a lot of interest because of their rich phenomenology. If the amplitude of the small-scale perturbations is large enough, some unique compact objects, primordial black holes (PBHs)\u00a0[@1967SvA....10..602Z; @Hawking:1971ei; @Carr:1974nx; @Carr:1975qj] and ultracompact" +"---\nabstract: 'Time-frequency representation (TFR) allowing for mode reconstruction plays a significant role in interpreting and analyzing nonstationary signals composed of various modes. However, it is difficult for most previous methods to handle signal modes with closely-spaced or spectrally-overlapped instantaneous frequencies (IFs), especially in adverse environments. To address this issue, we propose an enhanced TFR and mode decomposition (ETFR-MD) method, which is particularly adapted to represent and decompose multi-mode signals with close or crossing IFs under low signal-to-noise ratio (SNR) conditions. The emphasis of the proposed ETFR-MD is placed on accurate IF and instantaneous amplitude (IA) extraction of each signal mode based on short-time Fourier transform (STFT). First, we design an initial IF estimation method specifically for the cases involving crossing IFs. Further, a low-complexity mode enhancement scheme is proposed so that enhanced IFs better fit underlying IF laws. Finally, the IA extraction from the signal\u2019s STFT coefficients combined with the enhanced IFs enables us to reconstruct each signal mode. In addition, we derive mathematical expressions that reveal the optimal window lengths and the separation interference of our method. The proposed ETFR-MD is compatible with previous related methods, and thus can be regarded as a step toward a more general time-frequency representation and" +"---\nabstract: 'The obstruction set for graphs with knotless embeddings is not known, but a recent paper of Goldberg, Mattman, and Naimi indicates that it is quite large. Almost all known obstructions fall into four Triangle-Y families, and those authors ask if there is an efficient way of finding or estimating the size of such graph families. Inspired by this question, we investigate the family size for complete multipartite graphs.
Aside from three families that appear to grow exponentially, these families stabilize: after a certain point, increasing the number of vertices in a fixed part does not change the family size.'\naddress:\n- 'Department of Mathematics, Union College, Bailey Hall 202, Schenectady, New York 12308'\n- 'Department of Mathematics and Statistics, California State University, Chico, Chico, CA 95929-0525'\n- ' Department of Mathematics and Computer Science, Wesleyan University, Middletown, CT 06459'\n- 'Department of Mathematics, University of Dayton, Dayton, OH, 45469'\nauthor:\n- Danielle Gregg\n- 'Thomas W.\u00a0Mattman'\n- Zachary Porat\n- George Todd\nbibliography:\n- 'MultiGraphs.bib'\ntitle: Family sizes for complete multipartite graphs\n---\n\nIntroduction\n============\n\nThis paper is inspired by the question of Goldberg et al.\u00a0[@GMN]:\n\nGiven an arbitrary graph, is there an efficient way of finding, or" +"---\nabstract: 'Recent anomalies exhibited by satellites and rocket bodies have highlighted that a population of faint debris exists at geosynchronous (GEO) altitudes, where there are no natural removal mechanisms. Despite previous optical surveys probing to around 10\u201320cm in size, regular monitoring of faint sources at GEO is challenging; thus, our knowledge remains sparse. It is essential that we continue to explore the faint debris population using large telescopes to better understand the risk posed to active GEO satellites. To this end, we present photometric results from a survey of the GEO region carried out with the 2.54m Isaac Newton Telescope in La Palma, Canary Islands. We probe to 21$^\\text{st}$ visual magnitude (around 10cm, assuming Lambertian spheres with an albedo of 0.1), uncovering 129 orbital tracks with GEO-like motion across the eight nights of dark-grey time comprising the survey. The faint end of our brightness distribution continues to rise until the sensitivity limit of the sensor is reached, suggesting that the modal brightness could be even fainter. We uncover a number of faint, uncatalogued objects that show photometric signatures of rapid tumbling, many of which straddle the limiting magnitude of our survey over the course of a single exposure, posing" +"---\nabstract: 'Session types specify communication protocols for communicating processes, and session-typed languages are often specified using substructural operational semantics given by multiset rewriting systems. We give an observed communication semantics\u00a0[@atkey_2017:_obser_commun_seman_class_proces] for a session-typed language with recursion, where a process\u2019s observation is given by its external communications. To do so, we introduce *fair executions* for multiset rewriting systems, and extract observed communications from fair process executions.
This semantics induces an intuitively reasonable notion of observational equivalence that we conjecture coincides with semantic equivalences induced by denotational semantics\u00a0[@kavanagh_2020:_domain_seman_higher_v3], bisimulations\u00a0[@gommerstadt_2018:_session_typed_concur_contr], and barbed congruences\u00a0[@toninho_2015:_logic_found_session_concur_comput; @kokke_2019:_better_late_than_never] for these languages.'\nauthor:\n- Ryan Kavanagh\nbibliography:\n- 'library.bib'\ntitle: Substructural Observed Communication Semantics\n---\n\nIntroduction {#sec:introduction}\n============\n\nA proofs-as-processes correspondence between linear logic and the session-typed $\\pi$-calculus is the basis of many programming languages for message-passing concurrency\u00a0[@caires_pfenning_2010:_session_types_intuit_linear_propos; @caires_2016:_linear_logic_propos; @toninho_2011:_depen_session_types; @wadler_2014:_propos_as_session]. Session types specify communication protocols, and all communication with session-typed processes must respect these protocols. If we take seriously the idea that we can only interact with processes through session-typed communication, then the only thing we can observe about them is their communications. Indeed, timing differences in communication are not meaningful due to the non-deterministic scheduling of process reductions, and \u201cforwarding\u201d or" +"---\nabstract: |\n We analyze the sensitivity of the extremal equations that arise from the first order necessary optimality conditions of nonlinear optimal control problems with respect to perturbations of the dynamics and of the initial data. To this end, we present an abstract implicit function approach with scaled spaces. We will apply this abstract approach to problems governed by semilinear PDEs. In that context, we prove an exponential turnpike result and show that perturbations of the extremal equation\u2019s dynamics, e.g., discretization errors, decay exponentially in time. The latter can be used for very efficient discretization schemes in a Model Predictive Controller, where only a part of the solution needs to be computed accurately. We showcase the theoretical results by means of two examples with a nonlinear heat equation on a two-dimensional domain.\n\n **Keywords.** Nonlinear Optimal Control, Sensitivity Analysis, Turnpike Property, Model Predictive Control\nauthor:\n- 'Lars Gr\u00fcne$^{1}$, Manuel Schaller$^{1,2}$, and Anton Schiela$^{1}$'\nbibliography:\n- 'references.bib'\ntitle: Abstract nonlinear sensitivity and turnpike analysis and an application to semilinear parabolic PDEs\n---\n\n[^1]\n\n[^2] [^3]\n\n[^4]\n\nIntroduction\n============\n\nIn this paper we provide an abstract framework for exponential sensitivity analysis of nonlinear optimal control problems with respect to perturbations of the
The geometric structure of the spacetime identified exhibits a locally degenerate description in terms of an inherent ambiguity in the apparent matter field content, in a manner consistent with the Einstein field equation of general relativity while also incorporating an intrinsic indeterminacy that is characteristic of quantum phenomena. Here the quantum properties of all non-gravitational fields arise from this composition *of* spacetime, rather than via a postulated set of rules applied *in* spacetime that would then have to be extended to the gravitational field or spacetime itself. The contrast with several other approaches to quantum gravity and the manner in which this framework can incorporate the essential properties of general relativity and quantum theory, both individually and in" +"---\nauthor:\n- 'J. Syed'\n- 'Y. Wang'\n- 'H. Beuther'\n- 'J. D. Soler'\n- 'M. R. Rugel'\n- 'J. Ott'\n- 'A. Brunthaler'\n- 'J. Kerp'\n- 'M. Heyer'\n- 'R. S. Klessen'\n- 'Th. Henning'\n- 'S. C. O. Glover'\n- 'P. F. Goldsmith'\n- 'H. Linz'\n- 'J. S. Urquhart'\n- 'S. E. Ragan'\n- 'K. G. Johnston'\n- 'F. Bigiel'\nbibliography:\n- 'references.bib'\ndate: 'Received XX May XXXX; accepted 12 August 2020'\ntitle: Atomic and molecular gas properties during cloud formation\n---\n\n[Molecular clouds, which harbor the birthplaces of stars, form out of the atomic phase of the interstellar medium (ISM). To understand this transition process, it is crucial to investigate the spatial and kinematic relationships between atomic and molecular gas.]{} [We aim to characterize the atomic and molecular phases of the ISM and set their physical properties into the context of cloud formation processes.]{} [We studied the cold neutral medium (CNM) by means of self-absorption (HISA) toward the giant molecular filament GMF20.0-17.9 (distance=$3.5\rm\,kpc$, length $\sim$170$\rm\,pc$) and compared our results with molecular gas traced by emission. We fitted baselines of HISA features to emission spectra using first and second order polynomial functions.]{} [The CNM identified" +"---\nabstract: 'In the present paper, we prove that the convergence of rectifiable chains in flat norm implies the weak convergence of associated varifolds if the limit flat chain is rectifiable and the mass converges to the mass of the limit chain.'\naddress:\n- 'CHUNYAN LIU, SCHOOL OF MATHEMATICS AND STATISTICS, HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY, 430074, WUHAN, P.R. CHINA'\n- 'YANGQIN FANG, SCHOOL OF MATHEMATICS AND STATISTICS, HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY, 430074, WUHAN, P.R. CHINA'\n- 'NING ZHANG, SCHOOL OF MATHEMATICS AND STATISTICS, HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY, 430074, WUHAN, P.R. CHINA'\nauthor:\n- 'Chunyan Liu, Yangqin Fang,'\n- Ning Zhang\nbibliography:\n- '1.bib'\ntitle: 'The weak convergence of varifolds generated by rectifiable flat $G$-chains'\n---\n\nIntroduction\n============\n\nIn 1960, Federer and Fleming [@FF:1960] initially established integral currents to solve Plateau\u2019s problem, and Fleming [@Fleming:1966] extended this theory to flat chains with coefficients in an abelian group $G$. Taking $G$ to be a complete normed abelian group, Fleming [@Fleming:1966] proved that every finite-mass flat $G$-chain in $\mathbb{R}^n$ is rectifiable in the case that the coefficient group $G$ is finite.
Later, White [@White:1999:Acta; @White:1999:Ann] generalized this result to the case of a coefficient group without nonconstant continuous path" +"---\nauthor:\n- 'Elina Fuchs,'\n- 'Oleksii Matsedonskyi,'\n- 'Inbar Savoray,'\n- Matthias Schlaffer\nbibliography:\n- 'RelaxionCollider.bib'\ntitle: Collider searches of scalar singlets across lifetimes \n---\n\nIntroduction {#sec:introduction}\n============\n\nLight spin-zero singlets are ubiquitous in models of . They can have important phenomenological roles such as serving as a portal to a Dark Sector\u00a0[@Patt:2006fw] and rendering the electroweak phase transition first order to enable electroweak baryogenesis\u00a0[@Anderson:1991zb; @Espinosa:1993bs]. In many cases, the phenomenology associated with such NP can be encompassed in the minimal renormalizable extension of the obtained by adding one spin-zero singlet $\\phi$\u00a0[@OConnell:2006rsp]. We consider this model as a benchmark, assuming all other new degrees of freedom are sufficiently heavy or weakly coupled to the particles.\n\nDespite its simple setup, the singlet extension brings about a rich phenomenology related to the Higgs, by opening the exotic decay channel $h\\to\\phi\\phi$, if kinematically allowed (see [*e.g.*]{}\u00a0Ref.\u00a0[@Curtin:2013fra]), and by reducing the couplings of the Higgs boson to particles via singlet-Higgs mixing. This applies equally to scalars and pseudoscalars, though in the latter case the $\\phi$-Higgs mixing requires breaking of [$\\mathcal{CP}$]{}. The phenomenological implications reach far beyond Higgs-related observables, as the singlet inherits the couplings of the Higgs to" +"---\nabstract: 'This paper presents a novel data-driven, direct filtering approach for unknown linear time-invariant systems affected by unknown-but-bounded measurement noise. The proposed technique combines independent multistep prediction models, identified by resorting to the Set Membership framework, to refine a set that is guaranteed to contain the true system output. The filtered output is then computed as the central value in such a set. By doing so, the method achieves accurate output filtering and provides tight and minimal error bounds with respect to the true system output. To attain these results, the online solution of linear programs is required. A modified filtering approach with lower online computational cost is also presented, obtained by moving the solution of the optimization problems to an offline preliminary phase, at the cost of larger accuracy bounds. The performance of the proposed approaches is evaluated and compared with that of standard model-based filtering techniques in a numerical example.'\nauthor:\n- 'Marco\u00a0Lauricella and Lorenzo\u00a0Fagiano [^1]'\ntitle: |\n Data-driven filtering for linear systems using\\\n Set Membership multistep predictors\n---\n\nIntroduction {#s:intro}\n============\n\nIn this paper, we address the problem of output filtering for the case of linear time-invariant systems subject to unknown-but-bounded measurement disturbances. Our" +"---\nabstract: 'Measuring the F\u00f6rster resonance energy transfer (FRET) efficiency of freely diffusing single molecules provides information about the sampled conformational states of the molecules. Under equilibrium conditions, the distribution of the conformational states is independent of time, whereas it can vary over time under non-equilibrium conditions.
In this work, we consider the problem of parameter inference on non-equilibrium solution-based single-molecule FRET data. With a non-equilibrium model for the conformational dynamics and a model for the conformation-dependent FRET efficiency distribution, the likelihood function can be constructed. The model parameters, such as the rate constants of the non-equilibrium conformational dynamics model and the average FRET efficiencies of the different conformational states, are estimated from the data by maximizing the appropriate likelihood function via the Expectation-Maximization algorithm. We illustrate the likelihood method for a few simple non-equilibrium models and validate the method by simulations. The likelihood method can be applied to study protein folding, macromolecular complex formation, protein conformational dynamics and other non-equilibrium processes at the single-molecule level and in solution.'\nauthor:\n- |\n Marijn de Boer\\\n Groningen Biomolecular Sciences and Biotechnology Institute\\\n University of Groningen\\\n Groningen, The Netherlands\\\n [](mailto:marijndeboer4@gmail.com)\nbibliography:\n- 'ms.bib'\ntitle: 'Maximum likelihood analysis of non-equilibrium solution-based single-molecule" +"---\nabstract: 'Embryo quality assessment after in vitro fertilization (IVF) is primarily done visually by embryologists. Variability among assessors, however, remains one of the main causes of the low success rate of IVF. This study aims to develop an automated embryo assessment based on a deep learning model. This study includes a total of 1084 images from 1226 embryos. The images were captured by an inverted microscope at day 3 after fertilization. The images were labelled based on Veeck criteria that differentiate embryos into grades 1 to 5 based on the size of the blastomere and the grade of fragmentation. Our deep learning grading results were compared to the grading results from trained embryologists to evaluate the model performance. Our best model, obtained by fine-tuning a pre-trained ResNet50 on the dataset, achieves 91.79% accuracy. The model presented could be developed into an automated embryo assessment method in point-of-care settings.'\nauthor:\n- \n- \nbibliography:\n- 'embryo.bib'\ntitle: Human Blastocyst Classification after In Vitro Fertilization Using Deep Learning\n---\n\nin vitro fertilization, embryo grading, deep learning\n\nIntroduction\n============\n\nEmbryo quality plays a pivotal role in a successful IVF cycle. Embryologists assess embryo quality from the morphological appearance using direct visualization [@cummins1986formula]. There" +"---\nabstract: 'Deep neural networks have become the first choice for researchers working on algorithmic aspects of learning-to-rank. Unfortunately, it is not trivial to find the optimal setting of hyper-parameters that achieves the best ranking performance. As a result, it becomes more and more difficult to develop a new model and conduct a fair comparison with prior methods, especially for newcomers. In this work, we propose *PT-Ranking*[^1], an open-source project based on PyTorch for developing and evaluating learning-to-rank methods using deep neural networks as the basis to construct a scoring function. On one hand, PT-Ranking includes many representative learning-to-rank methods. Besides the traditional optimization framework via empirical risk minimization, an adversarial optimization framework is also integrated.
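For orientation, a typical listwise learning-to-rank loss of the kind such toolkits optimize under empirical risk minimization is the ListNet-style top-one cross-entropy between score- and relevance-induced distributions over a query's documents. The following is a generic illustration, not PT-Ranking's own code:

```python
# Generic ListNet-style listwise loss (top-one approximation): cross-entropy
# between the relevance-induced and score-induced distributions over the
# documents of one query. Illustrative only; not PT-Ranking's API.
import numpy as np

def listnet_loss(scores, relevance):
    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()
    p_true = softmax(relevance.astype(float))
    p_pred = softmax(scores)
    return -np.sum(p_true * np.log(p_pred + 1e-12))

print(listnet_loss(np.array([2.0, 1.0, 0.1]), np.array([2, 1, 0])))
```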
Furthermore, PT-Ranking\u2019s modular design provides a set of building blocks that users can leverage to develop new ranking models. On the other hand, PT-Ranking supports comparing different learning-to-rank methods on the widely used datasets (e.g., MSLR-WEB30K, Yahoo!LETOR and Istella LETOR) in terms of different metrics, such as precision, MAP, nDCG, nERR. By randomly masking the ground-truth labels with a specified ratio, PT-Ranking makes it possible to examine to what extent the ratio of unlabelled query-document pairs affects the performance of different learning-to-rank methods. We further" +"---\nabstract: 'This paper investigates the problem of bidirectional energy exchange between electric vehicles (EVs) and road lanes embedded with wireless power transfer technologies called wireless charging-discharging lanes (WCDLs). As such, EVs could provide better services to the grid, especially for balancing supply and demand, while bringing convenience for EV users, because no cables or EV stops are needed. To enable this EV\u2013WCDL energy exchange, a novel decentralized peer-to-peer (P2P) trading mechanism is proposed, in which EVs directly negotiate with a WCDL to reach consensus on the energy price and amounts to be traded. The energy price and amounts are solutions of an optimization problem aiming at optimizing private cost functions of EVs and WCDL. The negotiation process between EVs and WCDL is secured by a privacy-preserving consensus mechanism. Further, to assure successful trading with desired energy prices and amounts, an analytical and systematic method is proposed to select cost function parameters by EVs and WCDL in a fully decentralized manner. Simulations are then carried out to validate the developed theoretical results, which confirm the effectiveness and scalability of the proposed algorithm.'\nbibliography:\n- 'References.bib'\n---\n\n**Electric Vehicle \u2013 Wireless Charging-Discharging Lane Decentralized Peer-to-Peer Energy Trading**\\\n[Dinh Hoa Nguyen]{}\\\nInternational Institute for" +"---\nabstract: 'When the Lagrangian SGH method based on a triangular mesh is used to simulate compressible hydrodynamics, cell-to-cell spatial oscillation of physical quantities (also called \u201ccheckerboard oscillation\u201d) readily occurs because of the stiffness of the triangular mesh. A matter flow method is proposed to alleviate the oscillation of physical quantities caused by the triangular stiffness. The basic idea of this method is to attribute the stiffness of the triangle to the fact that the edges of the triangular mesh cannot undergo bending motion, and to compensate for the effect of the missing edge bending motion by means of matter flow. Three effects are considered in our matter flow method: (1) transport of the mass, momentum and energy carried by the moving matter; (2) the work done on the element, since the flow of matter changes the specific volume of the grid element; (3) the effect of matter flow on the strain rate in the element.
Numerical experiments show that the proposed matter flow method can effectively alleviate the spatial oscillation of physical quantities.'\naddress:\n- 'School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China'\n- 'Institute of Fluid Physics, CAEP, Mianyang 621999, China'\n- 'Department of Mechanics and Aerospace Engineering," +"---\nbibliography:\n- 'fpmm.bib'\n---\n\nIntroduction\n============\n\nSchur polynomials, named after Issai Schur, are a class of symmetric polynomials indexed by decreasing sequences of non-negative integers (partitions), which form a linear basis for the space of all symmetric polynomials; see [@IGM15]. Besides their applications in representation theory, Schur polynomials also play an important role in the study of integrable lattice models in statistical mechanics (see [@AB07; @AB11]). One example of such a model is the dimer model, or equivalently, the random tiling model; see [@CKP; @KO]. In this paper, we study the asymptotics of Schur polynomials on partitions which are almost periodic; the results are related to the law of large numbers and central limit theorem for dimer configurations on contracting square-hexagon lattices. The connection between asymptotics of Schur polynomials and scaling limit of random tilings has been investigated; see [@GP15; @bg; @bg16; @BG17] for uniform perfect matchings on the hexagon lattice (random lozenge tiling); [@bk] for uniform perfect matchings on the square grid (random domino tiling); and [@BL17; @ZL18; @Li182] for periodically weighted perfect matchings on the square-hexagon lattice. This paper further develops the technique in [@BL17; @ZL18; @Li182] to study the asymptotics of Schur polynomials on more general partitions." +"---\nabstract: |\n The [*linear search*]{} problem, informally known as the [*cow path*]{} problem, is one of the fundamental problems in search theory. In this problem, an immobile target is hidden at some unknown position on an unbounded line, and a mobile searcher, initially positioned at some specific point of the line called the [*root*]{}, must traverse the line so as to locate the target. The objective is to minimize the worst-case ratio of the distance traversed by the searcher to the distance of the target from the root, which is known as the [*competitive ratio*]{} of the search.\n\n In this work we study this problem in a setting in which the searcher has a [*hint*]{} concerning the target. We consider three settings in regard to the nature of the hint: i) the hint suggests the exact position of the target on the line; ii) the hint suggests the direction of the optimal search (i.e., to the left or the right of the root); and iii) the hint is a general $k$-bit string that encodes some information concerning the target. Our objective is to study the [*Pareto*]{}-efficiency of strategies in this model. Namely, we seek optimal, or near-optimal tradeoffs between" +"---\nabstract: 'This work demonstrates that using the objective with the independence assumption for modelling the span probability $P(a_s,a_e) = P(a_s)P(a_e)$ of a span starting at position $a_s$ and ending at position $a_e$ has adverse effects. Therefore we propose multiple approaches to modelling the joint probability $P(a_s,a_e)$ directly. Among those, we propose a compound objective, built from the joint probability while still keeping the objective with the independence assumption as an auxiliary objective.
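A minimal sketch of the joint-span scoring idea follows. Names and the additive form of the joint logit are our own illustrative assumptions; the paper's compound objective additionally keeps the independent start/end losses as auxiliaries:

```python
# Score every valid span (i, j) jointly instead of assuming
# P(a_s, a_e) = P(a_s) P(a_e): normalize by a softmax over all valid spans.
import numpy as np

def joint_span_probs(start_logits, end_logits, max_len=30):
    n = len(start_logits)
    scores = start_logits[:, None] + end_logits[None, :]
    valid = np.triu(np.ones((n, n), dtype=bool)) & \
            ~np.triu(np.ones((n, n), dtype=bool), k=max_len)  # i <= j < i + max_len
    scores = np.where(valid, scores, -np.inf)
    flat = np.exp(scores - scores[valid].max())
    return flat / flat[valid].sum()   # P(a_s = i, a_e = j) over valid spans

probs = joint_span_probs(np.random.randn(64), np.random.randn(64))
```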
We find that the compound objective is consistently superior or equal to the alternatives in exact match. Additionally, we identified common errors caused by the assumption of independence and manually checked the counterpart predictions, demonstrating the impact of the compound objective on real examples. Our findings are supported via experiments with three extractive QA models (BIDAF, BERT, ALBERT) over six datasets, and our code, individual results, and manual analysis are available online[^1].'\nauthor:\n- |\n Martin Fajcik, Josef Jon, Pavel Smrz\\\n Brno University of Technology\\\n [{ifajcik,ijon,smrz}@fit.vutbr.cz]{}\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: Rethinking the Objectives of Extractive Question Answering\n---\n\n=1\n\nIntroduction\n============\n\nThe goal of extractive question answering (EQA) is to find the span boundaries \u2013 the start and the end of the span from text evidence, which answers" +"---\nabstract: 'This paper investigates whether in frictional granular packings, like in Hamiltonian amorphous elastic solids, the stress autocorrelation matrix presents long-range anisotropic contributions, just like elastic Green\u2019s functions. We find that in a standard model of frictional granular packing this is not the case. We prove quite generally that mechanical balance and material isotropy constrain the stress auto-correlation matrix to be fully determined by two spatially isotropic functions: the pressure and torque auto-correlations. Pressure and torque fluctuations that are, respectively, normal and hyper-uniform would force the stress autocorrelation to decay like the elastic Green\u2019s function. Since we find the torque fluctuations to be hyper-uniform, the culprit is the pressure, whose fluctuations decay more slowly than normal as a function of the system\u2019s size. Investigating the reason for these abnormal pressure fluctuations we discover that anomalous correlations build up already during the compression of the dilute system before jamming. Once jammed, these correlations remain frozen. Whether this is true for frictional matter in general or is a consequence of the model\u2019s properties is a question that must await experimental scrutiny and possible alternative models.'\nauthor:\n- 'Ana\u00ebl Lema\u00eetre$^1$, Chandana Mondal$^2$, Itamar Procaccia$^{2,3}$ and Saikat Roy$^2$'\nbibliography:\n- 'All.bib'\ntitle: Stress" +"---\nabstract: 'Ultrafast revolutionized biomedical imaging with its capability of acquiring full-view frames at over , unlocking breakthrough modalities such as shear-wave elastography and functional neuroimaging. Yet, it suffers from strong diffraction artifacts, mainly caused by , , or . Multiple acquisitions are typically required to obtain sufficient image quality, at the cost of a reduced frame rate. To meet the increasing demand for high-quality imaging from single unfocused acquisitions, we propose a two-step -based image reconstruction method, compatible with real-time imaging. A low-quality estimate is obtained by means of a backprojection-based operation, akin to conventional beamforming, from which a high-quality image is restored using a residual with multiscale and multichannel filtering properties, trained specifically to remove the diffraction artifacts inherent to ultrafast imaging. To account for both the and the oscillating properties of images, we introduce the as a training loss function. Experiments were conducted with a linear transducer array, in single plane-wave () imaging.
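For orientation, the backprojection-based operation referred to above is, in the simplest single-plane-wave case, a delay-and-sum beamformer. A schematic sketch follows; the geometry, names, and 0-degree steering are our assumptions, not the paper's pipeline:

```python
# Schematic delay-and-sum (DAS) beamforming for a single 0-degree plane wave
# on a linear array; rf[k, n] holds the signal of element k at sample n.
import numpy as np

def das_plane_wave(rf, elem_x, fs, c, grid_x, grid_z):
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # transmit delay: the plane wave reaches depth z at time z / c;
            # receive delay: echo travels from (x, z) back to each element
            tau = (z + np.sqrt(z**2 + (x - elem_x) ** 2)) / c
            idx = np.round(tau * fs).astype(int)
            ok = idx < rf.shape[1]
            img[iz, ix] = rf[np.flatnonzero(ok), idx[ok]].sum()
    return img
```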
Training was performed on a simulated dataset, crafted to contain a wide diversity of structures and echogenicities. Extensive numerical evaluations demonstrate that the proposed approach can reconstruct images from single with a quality similar to that of gold-standard imaging, on a dynamic range" +"---\nabstract: |\n Answering connectivity queries in semi-algebraic sets is a long-standing and challenging computational issue with applications in robotics, in particular for the analysis of kinematic singularities. One task there is to compute the number of connected components of the complement of the singularities of the kinematic map. Another task is to design a continuous path joining two given points lying in the same connected component of such a set. In this paper, we push forward the current capabilities of computer algebra to obtain computer-aided proofs of the analysis of the kinematic singularities of various robots used in industry.\n\n We first show how to combine mathematical reasoning with easy symbolic computations to study the kinematic singularities of an infinite family (depending on parameters) modelled by the UR-series produced by the company \u201cUniversal Robots\u201d. Next, we compute roadmaps (which are curves used to answer connectivity queries) for this family of robots. We design an algorithm for \u201csolving\u201d positive-dimensional polynomial systems depending on parameters. \u201cSolving\u201d here means partitioning the parameter space into semi-algebraic components over which the number of connected components of the semi-algebraic set defined by the input system is invariant. Practical experiments confirm our computer-aided" +"---\nabstract: 'Automatic Defect Analysis and Qualification (ADAQ) is a collection of automatic workflows developed for high-throughput simulations of magneto-optical properties of point defects in semiconductors. These workflows handle the vast number of defects by automating the processes to relax the unit cell of the host material, construct supercells, create point defect clusters, and execute calculations in both the electronic ground and excited states. The main outputs are the magneto-optical properties which include zero-phonon lines, zero-field splitting, and hyperfine coupling parameters. In addition, the formation energies are calculated. We demonstrate the capability of ADAQ by performing a complete characterization of the silicon vacancy in silicon carbide in the polytype 4H (4H-SiC).'\nauthor:\n- Joel Davidsson\n- Viktor Iv\u00e1dy\n- Rickard Armiento\n- 'Igor A. Abrikosov'\nbibliography:\n- 'references.bib'\ntitle: 'ADAQ: Automatic workflows for magneto-optical properties of point defects in semiconductors'\n---\n\nIntroduction\n============\n\nPoint defects in wide-bandgap semiconductors have enabled a wide range of applications, including but not limited to qubit realizations\u00a0 [@Childress:Science2006; @Jelezko:PSS2006; @Jacques2009; @Hanson:Nature2008; @Awschalom2013], biosensors\u00a0[@mcguinness2011quantum; @Kucsko2013; @Balasubramanian:Nature2008], accurate chemical sensors\u00a0[@aslam2017nanoscale], nanoscale electric field and strain sensors\u00a0[@Falk2014], and nano thermometers\u00a0[@anisimov2016optical].
Most of these applications have been realized with the NV center in diamond\u00a0[@Davies:PRSLA1976;" +"---\nabstract: |\n The Gamma-Ray Burst Monitor (GBM) on the [*Fermi Gamma-Ray Space Telescope*]{}, for the first time, detected a short gamma-ray burst (SGRB) signal accompanying the gravitational wave signal GW170817 in 2017. The detection and localization of the gravitational wave and gamma-ray source led all other space- and ground-based observatories to measure its kilonova and afterglow across the electromagnetic spectrum, which started a new era in astronomy, the so-called multi-messenger astronomy. Therefore, the localization of short gamma-ray bursts, as counterparts of verified gravitational waves, is of crucial importance since this will allow observatories to measure the kilonovae and afterglows associated with these explosions. Our results show that an automated network of observatories, such as the Stellar Observations Network Group (SONG), can be coupled with an interconnected multi-hop array of CubeSats for transients (IMPACT) to localize SGRBs. IMPACT is a mega-constellation of $\\sim$80 CubeSats, each of which is equipped with gamma-ray detectors with ultra-high temporal resolution to conduct full sky surveys in an energy range of 50-300 keV and promptly downlink the required data to a ground station for high-accuracy localization of the detected SGRB. Additionally, we analyze propagation and transmission delays from receipt of an SGRB signal" +"---\nabstract: |\n We demonstrate through experiments and numerical simulations that low-density, low-loss, meter-scale plasma channels can be generated by employing a conditioning laser pulse to ionize the neutral gas collar surrounding a hydrodynamic optical-field-ionized (HOFI) plasma channel. We use particle-in-cell simulations to show that the leading edge of the conditioning pulse ionizes the neutral gas collar to generate a deep, low-loss plasma channel which guides the bulk of the conditioning pulse itself as well as any subsequently injected pulses. In proof-of-principle experiments we generate conditioned HOFI (CHOFI) waveguides with axial electron densities of $n_\\mathrm{e0} \\approx 1 \\times 10^{17} \\; \\mathrm{cm^{-3}}$, and a matched spot size of $26 \\; \\mathrm{\\mu m}$. The power attenuation length of these CHOFI channels was calculated to be $L_\\mathrm{att} = (21 \\pm 3) \\; \\mathrm{m}$, more than two orders of magnitude longer than achieved by HOFI channels. Hydrodynamic and particle-in-cell simulations demonstrate that meter-scale CHOFI waveguides with attenuation lengths exceeding 1 m could be generated with a total laser pulse energy of only $1.2$ J per meter of channel. The properties of CHOFI channels are ideally suited to many applications in high-intensity light-matter interactions, including multi-GeV plasma accelerator stages operating at high pulse repetition rates." +"---\nabstract: 'COVID-19 outbreaks have proven to be very difficult to isolate and extinguish before they spread out. An important reason behind this might be that epidemiological barriers consisting of stopping symptomatic people are likely to fail because of the contagion time before onset, mild cases and/or asymptomatic carriers. Motivated by these special COVID-19 features, we study a scheme for containing an outbreak in a city that consists of adding an extra firewall block between the outbreak and the rest of the city.
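A hedged, minimal version of the kind of coupled compartment model described next is a two-block SIR system with a small inter-block flux; the parameters and structure below are illustrative only, and the paper's model additionally includes stochastic noise:

```python
# Two coupled SIR blocks: block 0 hosts the outbreak, block 1 is the rest of
# the city; `flux` mixes infection pressure across the interface. Illustrative.
import numpy as np

def step(S, I, R, beta, gamma, flux, dt=0.1):
    N = S + I + R
    force = beta * I / N + flux * (beta * I[::-1] / N[::-1])  # cross-block term
    dS = -force * S
    dI = force * S - gamma * I
    dR = gamma * I
    return S + dt * dS, I + dt * dI, R + dt * dR

S = np.array([9_990.0, 490_000.0]); I = np.array([10.0, 0.0]); R = np.zeros(2)
for _ in range(2000):
    S, I, R = step(S, I, R, beta=0.3, gamma=0.1, flux=0.01)
print(R)  # final attack sizes per block
```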
We implement a coupled compartment model with stochastic noise to simulate a localized outbreak that is partially isolated and analyze its evolution with and without a firewall for different plausible model parameters. We explore how further improvements could be achieved if the epidemic evolution were to trigger policy changes for the flux and/or lock-down in the different blocks. Our results show that a substantial improvement is obtained by merely adding an extra block between the outbreak and the bulk of the city.'\nbibliography:\n- 'biblio.bib'\n---\n\nICAS 052/20\n\n[**Containing COVID-19 outbreaks using a firewall**]{}\\\n\\\n[*$^{(a)}$ International Center for Advanced Studies (ICAS), UNSAM and CONICET,\\\nCampus Miguelete, 25 de Mayo y Francia, (1650) Buenos Aires, Argentina* ]{}\\\n[*$^{(b)}$ Centro" +"---\nabstract: 'Differential geometry offers a powerful framework for optimising and characterising finite-time thermodynamic processes, both classical and quantum. Here, we start with a pedagogical introduction to the notion of thermodynamic length. We review and connect different frameworks where it emerges in the quantum regime: adiabatically driven closed systems, time-dependent Lindblad master equations, and discrete processes. A geometric lower bound on entropy production in finite time is then presented, which represents a quantum generalisation of the original classical\u00a0bound. Following this, we review and develop some general principles for the optimisation of thermodynamic processes in the linear-response regime. These include constant speed of control variation according to the thermodynamic metric, absence of quantum coherence, and optimality of small cycles around the point of maximal ratio between heat capacity and relaxation time for Carnot\u00a0engines.'\nauthor:\n- Paolo Abiuso\n- 'Harry J.\u00a0D. Miller'\n- 'Mart\u00ed Perarnau-Llobet'\n- Matteo Scandi\ntitle: Geometric optimisation of quantum thermodynamic processes\n---\n\nIntroduction\n============\n\nQuasistatic processes can be successfully characterised by a few simple and universal results: work is given by the equilibrium free energy difference between the endpoints of a transformation, the\u00a0efficiency of a Carnot engine depends only on the temperatures of the thermal" +"---\nabstract: 'Characterizing users\u2019 interests accurately plays a significant role in an effective recommender system. Sequential recommender systems can learn powerful hidden representations of users from successive user-item interactions and dynamic user preferences. To analyze such sequential data, the use of self-attention mechanisms and bidirectional neural networks has gained much attention recently. However, previous works share a common limitation: they model the user\u2019s main purposes in the behavioral sequences only separately and locally, lacking a global representation of the user\u2019s whole sequential behavior. To address this limitation, we propose a novel bidirectional sequential recommendation algorithm that integrates the user\u2019s local purposes with the global preference by additive supervision of the matching task. Particularly, we combine the mask task with the matching task in the training process of the bidirectional encoder. A new sample production method is also introduced to alleviate the effect of mask noise. 
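For readers unfamiliar with the mask task, a generic BERT-style masked-sample generator for item sequences looks roughly as follows; the mask ratio, reserved ids, and the 80/10/10 split are common defaults assumed here for illustration, not necessarily the paper's settings.

```python
# Generic masked-sequence sample production for a bidirectional recommender
# (BERT-style "mask task"). The mask ratio and special token ids are
# illustrative assumptions, not the paper's exact procedure.
import random

PAD, MASK = 0, 1  # reserved item ids (assumed)

def make_masked_sample(seq, num_items, mask_ratio=0.2, rng=random):
    """Return (inputs, labels): labels hold the original id at masked
    positions and PAD elsewhere, so the loss is computed only there."""
    inputs, labels = [], []
    for item in seq:
        if rng.random() < mask_ratio:
            labels.append(item)
            r = rng.random()
            if r < 0.8:
                inputs.append(MASK)                         # usual case: mask token
            elif r < 0.9:
                inputs.append(rng.randrange(2, num_items))  # random item
            else:
                inputs.append(item)                         # keep as-is, reduces mask noise
        else:
            inputs.append(item)
            labels.append(PAD)
    return inputs, labels

inputs, labels = make_masked_sample([5, 9, 23, 7, 42], num_items=100)
print(inputs, labels)
```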
Our proposed model can not only learn bidirectional semantics from users\u2019 behavioral sequences but also explicitly produce user representations that capture the user\u2019s global preference. Extensive empirical studies demonstrate our approach considerably outperforms various baseline models.'\nauthor:\n- Lingxiao Zhang\n- Jiangpeng Yan\n- Yujiu Yang\n- Li Xiu\ntitle: 'Match4Rec: A" +"---\nabstract: 'In this work we propose a new, arbitrary order space-time finite element discretisation for Hamiltonian PDEs in multisymplectic formulation. We show that the new method, which is obtained by using both continuous and discontinuous discretisations in space, admits a local and global conservation law of energy. We also show existence and uniqueness of solutions of the discrete equations. Further, we illustrate the error behaviour and the conservation properties of the proposed discretisation in extensive numerical experiments on the linear and nonlinear wave equation and on the nonlinear Schr\u00f6dinger equation.'\naddress:\n- ' Elena Celledoni [^1]'\n- ' James Jackaman [^2]'\nauthor:\n- Elena Celledoni\n- James Jackaman\nbibliography:\n- 'multisym.bib'\ntitle: Discrete conservation laws for finite element discretisations of multisymplectic PDEs\n---\n\nIntroduction {#sec:introduction}\n============\n\nFinite element discretisations of space-time variational problems have seen a revival of interest in the recent literature [@henning19mor; @perugia20tpa; @antonietti18aho; @urban14aie], with their origin going back to the work of [@lions68pal; @AzizMonk:1989]. The focus of the present work is the structure-preserving discretisation of variational problems using finite element methods. The point of departure is the variational space-time formulation of PDE problems arising as the Euler-Lagrange equations of a space-time action functional. Formally via" +"---\nabstract: '[^1]Understanding dynamic human mobility changes and spatial interaction patterns at different geographic scales is crucial for assessing the impacts of non-pharmaceutical interventions (such as stay-at-home orders) during the COVID-19 pandemic. In this data descriptor, we introduce a regularly-updated multiscale dynamic human mobility flow dataset across the United States, with data starting from March 1st, 2020. By analysing millions of anonymous mobile phone users\u2019 visits to various places provided by SafeGraph, the daily and weekly dynamic origin-to-destination (O-D) population flows are computed, aggregated, and inferred at three geographic scales: census tract, county, and state. There is high correlation between our mobility flow dataset and openly available data sources, which shows the reliability of the produced data. Such a high spatiotemporal resolution human mobility flow dataset at different geographic scales over time may help monitor epidemic spreading dynamics, inform public health policy, and deepen our understanding of human behaviour changes under the unprecedented public health crisis. 
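As a schematic illustration of how such O-D flows are typically aggregated across geographic scales (the column names, toy records, and tract-to-county lookup are invented for the example; this is not the SafeGraph schema):

```python
# Illustrative sketch of turning anonymized visit records into
# origin-to-destination (O-D) flow tables at increasing geographic scales.
import pandas as pd

# Each row: one device's home census tract, visited census tract, date.
visits = pd.DataFrame({
    "home_tract":    ["A", "A", "B", "B", "B"],
    "visited_tract": ["B", "B", "A", "C", "C"],
    "date": pd.to_datetime(["2020-03-01"] * 5),
})

# Daily tract-to-tract flows: count visits per (origin, destination, day).
daily = (visits.groupby(["home_tract", "visited_tract", "date"])
               .size().rename("visits").reset_index())

# Aggregating to a coarser scale (e.g., tract -> county) is a mapping + sum.
tract_to_county = {"A": "X", "B": "X", "C": "Y"}  # assumed lookup table
daily["o_county"] = daily["home_tract"].map(tract_to_county)
daily["d_county"] = daily["visited_tract"].map(tract_to_county)
county = (daily.groupby(["o_county", "d_county", "date"])["visits"]
               .sum().reset_index())
print(county)
```

Weekly flows follow the same pattern with dates resampled to weeks before grouping.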
This up-to-date O-D flow open data can support many other social sensing and transportation applications.'\nauthor:\n- 'Yuhao Kang^1^, Song Gao^1[\\*]{}^, Yunlei Liang^1^, Mingxiao Li^1,2,3^, Jinmeng Rao^1^, Jake Kruse^1^'\nbibliography:\n- 'references.bib'\ndate: 'August, 2020'\ntitle: 'Multiscale dynamic human mobility flow dataset in the" +"---\nabstract: 'We propose topological semimetals generated by the square-root operation for tight-binding models in two and three dimensions, which we call square-root topological semimetals. The square-root topological semimetals host topological band touching at finite energies, whose topological protection is inherited from the squared Hamiltonian. Such a topological character is also reflected in emergence of boundary modes with finite energies. Specifically, focusing on topological properties of squared Hamiltonian in class AIII, we reveal that a decorated honeycomb (decorated diamond) model hosts finite-energy Dirac cones (nodal lines). We also propose a realization of a square-root topological semimetal in a spring-mass model, where robustness of finite-energy Dirac points against the change of tension is elucidated.'\nauthor:\n- Tomonari Mizoguchi\n- Tsuneya Yoshida\n- Yasuhiro Hatsugai\nbibliography:\n- 'SQroot\\_TSM.bib'\ntitle: 'Square-root topological semimetals'\n---\n\nIntroduction\n============\n\nIn the past decade, novel classes of topological phases have been extensively explored\u00a0[@Wen2017]. Focusing on non-interacting fermions, there are two kinds of topological phases according to the bulk spectrum. One is a gapped topological phase where bulk has an energy gap and nontrivial topological numbers are defined for Bloch or Bogoliubov bands. Examples include topological insulators (TIs)\u00a0[@Haldane1988; @Kane2005; @Kane2005_2; @Bernevig2006; @Hasan2010; @Qi2011] and topological superconductors" +"---\nabstract: 'We propose a neural-network variational quantum algorithm to simulate the time evolution of quantum many-body systems. Based on a modified restricted Boltzmann machine (RBM) wavefunction ansatz, the proposed algorithm can be efficiently implemented in near-term quantum computers with low measurement cost. Using a qubit recycling strategy, only one ancilla qubit is required to represent all the hidden spins in an RBM architecture. The variational algorithm is extended to open quantum systems by employing a stochastic Schr\u00f6dinger equation approach. Numerical simulations of spin-lattice models demonstrate that our algorithm is capable of capturing the dynamics of closed and open quantum many-body systems with high accuracy without suffering from the vanishing gradient (or \u2018barren plateau\u2019) issue for the considered system sizes.'\nauthor:\n- Chee Kong Lee\n- Pranay Patil\n- Shengyu Zhang\n- Chang Yu Hsieh\nbibliography:\n- 'qrbm.bib'\ntitle: 'A Neural-Network Variational Quantum Algorithm for Many-Body Dynamics'\n---\n\nIntroduction\n============\n\nAccurate and efficient simulation of quantum many-body dynamics remains one of the most challenging problems in physics, despite nearly a century of progress. Renewed interest has been sparked in this field due to recent experiments with Rydberg atoms[@Ryd; @Ryd2], which suggest the existence of scar states which do not thermalize." +"---\nabstract: 'We consider a game of persuasion with evidence between a sender and a receiver. The sender has private information. 
By presenting evidence on the information, the sender wishes to persuade the receiver to take a single action (e.g., hire a job candidate, or convict a defendant). The sender\u2019s utility depends solely on whether or not the receiver takes the action. The receiver\u2019s utility depends on both the action as well as the sender\u2019s private information. We study three natural variations. First, we consider sequential equilibria of the game without commitment power. Second, we consider a persuasion variant, where the sender commits to a signaling scheme and then the receiver, after seeing the evidence, takes the action or not. Third, we study a delegation variant, where the receiver first commits to taking the action if being presented certain evidence, and then the sender presents evidence to maximize the probability the action is taken. We study these variants through the computational lens, and give hardness results, optimal approximation algorithms, as well as polynomial-time algorithms for special cases. Among our results is an approximation algorithm that rounds a semidefinite program that might be of independent interest, since, to the best of" +"---\nabstract: 'We use geometric singular perturbation techniques combined with an action functional approach to study traveling pulse solutions in a three-component FitzHugh\u2013Nagumo model. First, we derive the profile of traveling $1$-pulse solutions with undetermined width and propagating speed. Next, we compute the associated action functional for this profile from which we derive the conditions for existence and a saddle-node bifurcation as the zeros of the action functional and its derivatives. We obtain the same conditions by using a different analytical approach that exploits the singular limit of the problem. We also apply this methodology of the action functional to the problem for traveling $2$-pulse solutions and derive the explicit conditions for existence and a saddle-node bifurcation. From these we deduce a necessary condition for the existence of traveling $2$-pulse solutions. We end this article with a discussion related to Hopf bifurcations near the saddle-node bifurcation.'\nauthor:\n- |\n Takashi\u00a0Teramoto [^1]\\\n School of Medicine,\\\n Asahikawa Medical University,\\\n Asahikawa, 078-8510, Japan\\\n `teramoto@asahikawa-med.ac.jp`\\\n Peter van\u00a0Heijster [^2]\\\n School of Mathematical Sciences,\\\n Queensland University of Technology,\\\n Brisbane, QLD 4001, Australia\ntitle: 'Traveling pulse solutions in a three-component FitzHugh\u2013Nagumo model'\n---\n\nIntroduction\n============\n\nThe study of spatially localized patterns in multi-component reaction-diffusion systems" +"---\nabstract: |\n Social media data can be a very salient source of information during crises. User-generated messages provide a window into people\u2019s minds during such times, allowing us insights about their moods and opinions. Due to the vast amounts of such messages, a large-scale analysis of population-wide developments becomes possible.\\\n In this paper, we analyze Twitter messages (tweets) collected during the first months of the COVID-19 pandemic in Europe with regard to their sentiment. This is implemented with a neural network for sentiment analysis using multilingual sentence embeddings. We separate the results by country of origin, and correlate their temporal development with events in those countries. This allows us to study the effect of the situation on people\u2019s moods. 
We see, for example, that lockdown announcements correlate with a deterioration of mood in almost all surveyed countries, which recovers within a short time span.\nauthor:\n- |\n Anna Kruspe\\\n German Aerospace Center (DLR)\\\n Institute of Data Science\\\n Jena, Germany\\\n `anna.kruspe@dlr.de`\\\n Matthias H\u00e4berle\\\n Technical University of Munich (TUM)\\\n Signal Processing in Earth Observation (SiPEO)\\\n Munich, Germany\\\n `matthias.haeberle@tum.de`\\\n Iona Kuhn\\\n German Aerospace Center (DLR)\\\n Institute of Data Science\\\n Jena, Germany\\\n `iona.kuhn@dlr.de`\\\n Xiao Xiang Zhu\\\n German Aerospace Center (DLR)\\\n Remote Sensing Technology" +"---\nbibliography:\n- 'bibliography.bib'\n- 'reference.bib'\n---\n\nIntroduction\n============\n\nAcademic scholars have appreciated the benefits that experimentation brings to firms for many decades [@march1991exploration; @sitkin1992learning; @sarasvathy2001causation; @thomke2001enlightened; @johari2015always; @kohavi2017surprising; @sun2018designing; @xiong2019optimal]. However, widespread adoption of the practice has only taken off in the last decade, partly fueled by the rapid cost reductions achieved by firms in the technology sector [@kohavi2007practical; @kohavi2009online; @bakshy2014designing; @azevedo2019b; @kohavi2020trustworthy]. Most large firms now possess internal tools for experimentation, and a growing number of smaller and more conventional companies are purchasing the capabilities from third-party sellers that offer full-stack integration [@thomke2020experimentation]. These tools typically allow simple \u201cA/B\u201d tests that compare the standard offering \u201cA\u201d to a new or improved version \u201cB\u201d. The comparisons are made across a range of different business outcomes, and the tests are usually conducted for at least a week [@kohavi2020trustworthy]. This simple practice has provided tremendous value to firms [@koning2019experimentation].\n\nHowever, some firms and authors have recognized the limitations of these simple A/B tests [@gupta2019top; @bojinov2020avoid]; the two most prominent being handling interference (the scenario where the assignment of one subject impacts another\u2019s outcomes) and estimating heterogeneous (or personalized) effects. For example, many online platforms and retail marketplaces often observe varying levels" +"---\nabstract: 'We conduct a thorough Bayesian analysis of the possibility that the black hole merger events seen in gravitational waves are primordial black hole (PBH) mergers. Using the latest merger rate models for PBH binaries drawn from a lognormal mass function we compute posterior parameter constraints and Bayesian evidences using data from the first two observing runs of LIGO-Virgo. We account for theoretical uncertainty due to possible disruption of the binary by surrounding PBHs, which can suppress the merger rate significantly. We also consider simple astrophysically motivated models and find that these are favoured decisively over the PBH scenario, quantified by the Bayesian evidence ratio. Paying careful attention to the influence of the parameter priors and the quality of the model fits, we show that the evidence ratios can be understood by comparing the predicted chirp mass distribution to that of the data. We identify the posterior predictive distribution of chirp mass as a vital tool for discriminating between models. 
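As an illustrative aside (our sketch, not the authors' analysis pipeline): the chirp mass of a binary with component masses $m_1, m_2$ is $\mathcal{M}_{\rm chirp} = (m_1 m_2)^{3/5} (m_1 + m_2)^{-1/5}$, so a model's predicted chirp-mass distribution can be built by drawing component masses from its mass function.

```python
# Sketch: predicted chirp-mass distribution for binaries drawn from a
# lognormal mass function. The lognormal parameters are illustrative
# assumptions, not the paper's posterior values.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = np.log(20.0), 0.5, 100_000  # assumed lognormal parameters
m1 = rng.lognormal(mu, sigma, n)           # component masses in M_sun
m2 = rng.lognormal(mu, sigma, n)

# Chirp mass: M_chirp = (m1*m2)**(3/5) / (m1+m2)**(1/5)
m_chirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

print("median chirp mass [M_sun]:", np.median(m_chirp))
print("fraction with M_chirp > 40 M_sun:", np.mean(m_chirp > 40.0))
```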
A model in which all mergers are PBH binaries is strongly disfavoured compared with astrophysical models, in part due to the over-prediction of heavy systems having $\\mathcal{M}_{{\\rm chirp}} \\gtrsim 40 \\, M_\\odot$ and positive skewness over the range of" +"---\nabstract: 'We present the results of a study of the Galactic candidate luminous blue variable Wray15-906, revealed via detection of its infrared circumstellar shell (of $\\approx2$pc in diameter) with the [*Wide-field Infrared Survey Explorer*]{} ([*WISE*]{}) and the [*Herschel Space Observatory*]{}. Using the stellar atmosphere code [cmfgen]{} and the [*Gaia*]{} parallax, we found that Wray15-906 is a relatively low-luminosity, $\\log(L/{\\rm\\,L_\\odot})\\approx5.4$, star with a temperature of $25\\pm2$kK, a mass-loss rate of $\\approx3\\times10^{-5} \\, {{\\rm\\,M_\\odot}\\, {\\rm yr}^{-1}}$, a wind velocity of $280\\pm50 \\, {{\\rm\\,km\\,s^{-1}}}$, and a surface helium abundance of $65\\pm2$ per cent (by mass). In the framework of single-star evolution, the obtained results suggest that Wray15-906 is a post-red supergiant star with an initial mass of $\\approx25 \\, {\\rm\\,M_\\odot}$ and that, before exploding as a supernova, it could transform for a short time into a WN11h star. Our spectroscopic monitoring with the Southern African Large Telescope (SALT) does not reveal significant changes in the spectrum of Wray15-906 during the last 8 yr, while the $V$-band light curve of this star over the years 1999\u20132019 shows quasi-periodic variability with a period of $\\approx1700$d and an amplitude of $\\approx0.2$mag. We estimated the mass of the shell to be $2.9\\pm0.5 \\, {\\rm\\,M_\\odot}$ assuming the gas-to-dust mass" +"---\nabstract: 'With progress towards more compact quantum computing architectures, fundamental questions regarding the entanglement of indistinguishable particles need to be addressed. In a solid-state device, this quest is naturally connected to the quantum correlations of electrons. Here, we investigate the entanglement between electrons, focusing on the entanglement of modes, the entanglement of particles, and the effect of particle-number superselection rules. We elucidate the formation of mode and particle entanglement in strongly correlated materials and show that both represent important resources in quantum information tasks such as quantum teleportation. To this end, we qualitatively and quantitatively analyze the entanglement in three electronic teleportation schemes: (i) quantum teleportation within a molecule on graphene, (ii) a nitrogen-vacancy center and (iii) a quantum dot array.'\naddress:\n- 'Centre de Physique Th\u00e9orique, Ecole Polytechnique, Institut Polytechnique de Paris, 91128 Palaiseau Cedex, France'\n- 'Department of Physics and Astronomy, Materials Theory, Uppsala University, 75120 Uppsala, Sweden'\nauthor:\n- Anna Galler\n- Patrik Thunstr\u00f6m\ntitle: Orbital and electronic entanglement in quantum teleportation schemes\n---\n\nIntroduction\n============\n\nEntanglement lies at the heart of quantum mechanics and has been investigated extensively during the last decades, mainly due to its importance in quantum information, cryptography and teleportation.[@ent_review_2009] The" +"---\nabstract: 'A Helly-type theorem for diameter provides a bound on the diameter of the intersection of a finite family of convex sets in $\\mathds{R}^d$ given some information on the diameter of the intersection of all sufficiently small subfamilies. 
We prove fractional and colorful versions of a longstanding conjecture by B\u00e1r\u00e1ny, Katchalski, and Pach. We also show that a Minkowski norm admits an exact Helly-type theorem for diameter if and only if its unit ball is a polytope and prove a colorful version for those that do. Finally, we prove Helly-type theorems for the property of \u201ccontaining $k$ collinear integer points.\u201d'\naddress:\n- 'Lawrence University, 711 E.\u00a0Boldt Way, Appleton, WI 54911'\n- 'Baruch College, City University of New York, One Bernard Baruch Way, New York, NY 10010'\nauthor:\n- Travis Dillon\n- Pablo Sober\u00f3n\ntitle: 'A m\u00e9lange of diameter Helly-type theorems'\n---\n\nIntroduction\n============\n\nHelly\u2019s theorem is one of the most prominent results on the intersection properties of families of convex sets [@Radon:1921vh; @Helly:1923wr]. It says that *if the intersection of every $d+1$ or fewer elements of a finite family of convex sets in ${\\mathds{R}}^d$ is nonempty, then the intersection of the entire family is nonempty.* This result has" +"---\nabstract: 'Many interesting experimental systems, such as cavity QED or central spin models, involve global coupling to a single harmonic mode. Out of equilibrium, it remains unclear under what conditions localized phases survive such global coupling. We study energy-dependent localization in the disordered Ising model with transverse and longitudinal fields coupled globally to a $d$-level system (qudit). Strikingly, we discover an inverted mobility edge, where high-energy states are localized while low-energy states are delocalized. Our results are supported by shift-and-invert eigenstate targeting and Krylov time evolution up to $L=13$ and $18$, respectively. We argue for a critical energy of the localization phase transition which scales as $E_c \\propto L^{1/2}$, consistent with finite-size numerics. We also show evidence for a reentrant MBL phase at even lower energies despite the presence of strong effects of the central mode in this regime. Similar results should occur in the central spin-$S$ problem at large $S$ and in certain models of cavity QED.'\nauthor:\n- 'Saeed Rahmanian Koshkaki, Michael H. Kolodrubetz'\nbibliography:\n- 'references.bib'\ntitle: 'Inverted many-body mobility edge in a central qudit problem'\n---\n\n![Proposed phase diagram of inverted mobility edge for Ising model in the presence of global qudit or spin-$S$" +"---\nabstract: 'An exploratory data analysis is an essential step for every data analyst to gain insights, evaluate data quality and (if required) select a machine learning model for further processing. While privacy-preserving machine learning is on the rise, more often than not this initial analysis is not counted towards the privacy budget. In this paper, we quantify the privacy loss for basic statistical functions and highlight the importance of taking it into account when calculating the privacy-loss budget of a machine learning approach.'\nauthor:\n- Saskia Nu\u00f1ez von Voigt\n- Mira Pauli\n- Johanna Reichert\n- Florian Tschorsch\ntitle: 'Every Query Counts: Analyzing the Privacy Loss of Exploratory Data Analyses'\n---\n\nIntroduction\n============\n\nOne of the most prevalent barriers to machine learning involves data management in general and information security and privacy in particular. This is especially relevant for sensitive data sets that, for example, include medical and financial data items. 
In order to overcome these barriers, the area of privacy-preserving machine learning gained attention\u00a0[@Al-RubaieC19; @MohasselZ17]. It is concerned with providing an infrastructure for secure and privacy-preserving data access as well as privacy-preserving model generation.\n\nWhile privacy-preserving machine learning reduces the risk of data leaks, particularly the risk of model inversion attacks, one aspect" +"---\nabstract: 'We present the design of a framework to describe parametrized exercise tasks on Haskell-I/O programming. Parametrized tasks can be instantiated randomly to quickly generate different instances of a task. Such automatic task generation is useful in many different ways. Manual task creation can be a time-consuming process, so formulating a task design once and then automatically generating different variations can save valuable time for the educator. The descriptions of tasks also serve as easy-to-understand documentation and can be reused in new task designs. On the student\u2019s side, automatic task generation, together with an automated assessment system, enables practicing on as many fresh exercise tasks as needed. Students can also each be given a slightly different version of tasks, reducing issues regarding plagiarism arising naturally in an e-learning environment. Our task generation is centered around a specification language for I/O behavior we developed in earlier work. The task generation framework, an embedded domain-specific language in Haskell, provides powerful primitives for the creation of various artifacts from specifications, including program code. We do not go into detail on the technical realization of these primitives. Our focus is on showcasing how such artifacts can be used as an" +"---\nabstract: 'Morphological analysis of longitudinal MR images plays a key role in monitoring disease progression for prostate cancer patients, who are placed under an active surveillance program. In this paper, we describe a learning-based image registration algorithm to quantify changes in regions of interest between a pair of images from the same patient, acquired at two different time points. Combining intensity-based similarity and gland segmentation as weak supervision, the population-data-trained registration networks significantly lowered the target registration errors (TREs) on holdout patient data, compared with those before registration and those from an iterative registration algorithm. Furthermore, this work provides a quantitative analysis on several longitudinal-data-sampling strategies and, in turn, we propose a novel regularisation method based on maximum mean discrepancy between differently-sampled training image pairs. Based on 216 3D MR images from 86 patients, we report a mean TRE of 5.6 mm and show statistically significant differences between the different training data sampling strategies.'\nauthor:\n- Qianye Yang\n- Yunguan Fu\n- Francesco Giganti\n- Nooshin Ghavami\n- Qingchao Chen\n- 'J. Alison Noble'\n- Tom Vercauteren\n- Dean Barratt\n- Yipeng Hu\ntitle: 'Longitudinal Image Registration with Temporal-order and Subject-specificity Discrimination'\n---\n\nIntroduction\n============\n\nMultiparametric MR (mpMR) imaging" +"---\nabstract: 'We refer by *threshold Ornstein-Uhlenbeck* to a continuous-time threshold autoregressive process. It follows the Ornstein-Uhlenbeck dynamics when above or below a fixed level, yet at this level (threshold) its coefficients can be discontinuous. 
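To fix ideas, a minimal Euler-Maruyama simulation of such a two-regime (threshold) Ornstein-Uhlenbeck process might look as follows; the parameter values and the simple regime switch at the threshold are illustrative assumptions only.

```python
# Euler-Maruyama sketch of a threshold Ornstein-Uhlenbeck process:
# dX_t = (b(X_t) - a(X_t) X_t) dt + sigma(X_t) dW_t, with coefficients
# switching at the threshold r. All parameter values are illustrative.
import numpy as np

r = 0.0                          # threshold
a   = {"+": 1.0, "-": 2.0}       # mean-reversion speeds (assumed)
b   = {"+": 0.5, "-": -0.5}      # drift levels (assumed)
sig = {"+": 0.3, "-": 0.6}       # volatilities (assumed)

def simulate(x0=0.0, T=10.0, n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        regime = "+" if x[k] >= r else "-"
        drift = b[regime] - a[regime] * x[k]
        x[k + 1] = x[k] + drift * dt + sig[regime] * np.sqrt(dt) * rng.standard_normal()
    return x

path = simulate()
print("fraction of time above threshold:", np.mean(path >= r))
```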
We discuss (quasi-)maximum likelihood estimation of the drift parameters, assuming both continuous-time and discrete-time observations. In the ergodic case, we derive consistency and the speed of convergence of these estimators in the long-time and high-frequency regimes. Based on these results, we develop a test for the presence of a threshold in the dynamics. Finally, we apply these statistical tools to the modeling of short-term US interest rates.'\nauthor:\n- 'Sara Mazzonetto[^1] \u00a0and Paolo Pigato[^2]'\nbibliography:\n- 'biblio.bib'\ntitle: ' Drift estimation of the threshold Ornstein-Uhlenbeck process from continuous and discrete observations '\n---\n\n[**Keywords:** ]{} Threshold diffusion, maximum likelihood, regime-switching, self-exciting process, interest rates, threshold Vasicek model, multi-threshold.\n\n[**AMS 2010:** ]{} primary: 62M05; secondary: 62F12; 60J60.\\\n[**JEL Classification:**]{} primary: C58; secondary: C22, G12.\n\nIntroduction {#sec:intro}\n============\n\nWe consider the diffusion process solution to the following stochastic differential equation (SDE) $$\\label{eq:AffDOBM}\n X_t=X_0+\\int_0^t \\sigma(X_s){\\,\\mathrm{d}}W_s+\\int_0^t \\left( b(X_s) - a(X_s) \\, X_s \\right) {\\,\\mathrm{d}}s , \\quad t\\geq 0,$$ with piecewise constant volatility coefficient, possibly discontinuous at $r\\in {\\mathbb{R}}$, $$\\label{sigmaDOBM}\n \\sigma(x)= \\sigma_+" +"---\nabstract: 'Assessing the probability that two or more gravitational wave (GW) events are lensed images of the same source requires an understanding of the properties of the lensed images. For short enough wavelengths where wave effects can be neglected, lensed images will generically have a fixed relative phase shift that needs to be taken into account in the lensing hypothesis. For non-precessing, circular binaries dominated by quadrupole radiation, these lensing phase shifts are degenerate with either a shift in the coalescence phase or a detector- and inclination-dependent shift in the orientation angle. This degeneracy is broken by the presence of higher harmonic modes with $|m|\\ne 2$ in the former and $|m| \\ne l$ in the latter. The presence of precession or eccentricity will also break this degeneracy. This implies that a lensed GW image will not necessarily be consistent with (unlensed) predictions from general relativity (GR). Therefore, unlike the conventional scenario of electromagnetic waves, strong lensing of GWs can lead to images with a modified phase evolution that can be observed. However, we find that for a wide range of parameters, the lensed (phase modified) waveform is similar enough to an unlensed (GR) waveform that GW detection pipelines" +"---\nabstract: 'We study Lense-Thirring precession of inviscid and viscous misaligned $\\alpha-$discs around a black hole using a gravitomagnetic term in the momentum equation. For weak misalignments, $i \\lesssim 10^{\\circ}$, the discs behave like rigid bodies, undergoing the full suite of classical harmonic oscillator dynamics including weak and critically damped motion (due to viscosity), precession (due to Lense-Thirring torque) and nutation (due to apsidal precession). For strong misalignments, $i \\gtrsim 30^{\\circ}$, we find that sufficiently thin, $h/r \\lesssim 0.05$, discs break, form a gap, and the inner and outer sub-discs evolve quasi-independently apart from slow mass transfer. Assuming the sound speed sets the communication speed of warps in the disc, we can estimate the breaking radius by requiring that the inner sub-disc precesses like a rigid body. 
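As a rough illustration of this rigid-body-precession criterion (an order-of-magnitude sketch under assumed parameters, not the paper's calculation): the weak-field Lense-Thirring nodal precession rate falls off as $\Omega_{\rm LT}(r) \approx 2GJ/(c^2 r^3)$, and comparing it with the warp communication time $r/c_s$ gives a crude breaking-radius estimate.

```python
# Order-of-magnitude sketch: Lense-Thirring nodal precession
# Omega_LT(r) ~ 2 G J / (c^2 r^3), compared with the warp communication
# time r / c_s. Where precession is faster than communication, rigid-body
# precession fails and the disc may break. Numbers are illustrative only.
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
M = 10 * M_sun                       # black hole mass (assumed)
a_spin = 0.5                         # dimensionless spin (assumed)
J = a_spin * G * M**2 / c            # black hole angular momentum
r_g = G * M / c**2

r = np.logspace(1, 4, 400) * r_g     # radii in metres
omega_lt = 2 * G * J / (c**2 * r**3)
h_over_r = 0.05                      # disc aspect ratio (assumed)
c_s = h_over_r * np.sqrt(G * M / r)  # sound speed ~ (h/r) * Keplerian speed

# Crude criterion: break where precession is faster than sound crossing.
breaks = omega_lt * (r / c_s) > 1.0
print("breaking region extends to ~%.0f r_g"
      % (r[breaks].max() / r_g if breaks.any() else 0))
```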
We explicitly show for the first time using a grid code that an Einstein potential is needed to reproduce the analytic properties of the inner disc edge and find disc breaking. At large inclination angles we find multiple disc breaks, consistent with recent GRMHD simulations of highly inclined discs. Our results suggest that the inclusion of a gravitomagnetic term and appropriate pseudo-Newtonian potential captures the important quantitative features of misaligned discs.'\nauthor:\n-" +"---\nabstract: 'We numerically study the three-dimensional (3D) quantum Hall effect (QHE) and magnetothermoelectric transport of Weyl semimetals in the presence of disorder. We obtain a bulk picture in which the exotic 3D QHE emerges in a finite range of Fermi energy around the Weyl points, determined by the gap between the $n=-1$ and $n=1$ Landau levels (LLs). The quantized Hall conductivity is attributable to the chiral zeroth LLs traversing the gap, and is robust against disorder scattering for an intermediate number of layers in the direction of the magnetic field. Moreover, we predict several interesting characteristic features of the thermoelectric transport coefficients in the 3D QHE regime, which can be probed experimentally. This may open a new avenue for exploring Weyl physics in topological materials.'\naddress: |\n $^1$ Jiangsu Key Laboratory for Optoelectronic Detection of Atmosphere and Ocean, Nanjing University of Information Science and Technology, Nanjing 210044, China\\\n $^2$ Department of Physics and Astronomy, California State University, Northridge, California 91330, USA\\\n $^3$ National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China\\\n $^4$ Collaborative Innovation Center of Advanced Microstructures, Nanjing 210093, China\nauthor:\n- 'R. Ma$^{1,2}$'\n- 'D. N. Sheng$^{2}$'\n- 'L. Sheng$^{3,4}$'\ntitle: '" +"---\nabstract: |\n Among the current challenges in Space Weather, one of the main ones is to forecast the internal magnetic configuration within Interplanetary Coronal Mass Ejections (ICMEs). Currently, an observed monotonic and coherent magnetic configuration is interpreted as the result of a spacecraft crossing a large flux rope with a helical magnetic field line topology. The classification of such an arrangement is essential to predict geomagnetic disturbance. The classification thus relies on the assumption that the ICME\u2019s internal structure is a well-organized magnetic flux rope. This paper applies machine learning and a current physical flux rope analytical model to identify and further understand the internal structures of ICMEs. We trained an image recognition artificial neural network with analytical flux rope data, generated from the range of many possible trajectories within a cylindrical (circular and elliptical cross-section) model. The trained network was then evaluated against the observed ICMEs from WIND during 1995-2015.\\\n The methodology developed in this paper can classify 84% of simple real cases correctly and has a 76% success rate when extended to a broader set with 5% noise applied, although it does exhibit a bias in favor of positive flux rope classification. 
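For intuition, synthetic crossings of an analytical flux rope can be generated along straight-line trajectories; the sketch below uses the classic circular force-free (Lundquist) solution, which may differ in detail from the paper's analytical model, with a straight chord as the assumed spacecraft path.

```python
# Sketch: synthetic magnetic-field time series for a spacecraft crossing a
# circular cylindrical (Lundquist) flux rope, B_z = B0*J0(alpha*r),
# B_phi = H*B0*J1(alpha*r). Geometry and parameters are illustrative.
import numpy as np
from scipy.special import j0, j1

def crossing(B0=10.0, H=+1, impact=0.3, n=200):
    """Field components along a chord at the given impact parameter
    (in units of the rope radius); alpha makes B_z vanish at the boundary."""
    alpha = 2.405                      # first zero of J0
    x = np.linspace(-1.0, 1.0, n)      # position along the chord
    r = np.sqrt(x**2 + impact**2)
    phi = np.arctan2(impact, x)
    Bz = B0 * j0(alpha * np.clip(r, 0, 1))
    Bphi = H * B0 * j1(alpha * np.clip(r, 0, 1))
    # Project the azimuthal field onto the chord's in-plane coordinates.
    return Bz, -Bphi * np.sin(phi), Bphi * np.cos(phi)

Bz, Bx, By = crossing(impact=0.2)
print("max |B_z|:", Bz.max(), " B_y rotation endpoints:", By[0], By[-1])
```

Varying the impact parameter, handedness `H`, and rope orientation yields the "range of many possible trajectories" used as training data.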
As a first step towards" +"---\nauthor:\n- 'Michael [F\u00fctterer]{} & Jos\u00e9 [Villanueva]{}'\nbibliography:\n- 'literatura.bib'\nsubtitle: An introduction to Iwasawa theory\ntitle: Surprising infinite towers\n---\n\nIntroduction {#introducci\u00f3n .unnumbered}\n============\n\nIn a broad sense, the goal of number theory is to study how integers or algebraic numbers, and the equations involving them, behave in different situations. This behaviour can often be described by algebraic structures. For example, the ideal class group of a number field describes unique factorization into primes in its ring of integers (or rather its failure). Another example is the Selmer group of an elliptic curve over a number field, which concerns the discrepancy between the existence of global points on the curve (that is, points defined over the number field) and local points (that is, points defined over a completion). For this reason, these groups are of great interest, but their study is intricate.\n\nThis pattern can be extended to other situations, the most general being that of a [ *motive*]{} \u2013 a term that we will not explain in this text, but which is perhaps the most general object of interest in geometry" +"---\nabstract: 'PSR\u00a0J1119$-$6127, a high-magnetic field pulsar detected from radio to high-energy wavelengths, underwent a magnetar-like outburst beginning on July 27, 2016. In this paper, we study the post-outburst multi-wavelength properties of this pulsar from the radio to GeV bands and discuss its similarity with the outburst of the magnetar XTE\u00a0J1810$-$197. In a phase-resolved spectral analysis of 0.5\u201310keV X-ray data collected in August 2016, the on-pulse and off-pulse spectra are both characterized by two blackbody components and also require a power-law component similar to the hard X-ray spectra of magnetars. This power-law component is no longer distinguishable in data from December 2016. We likewise find that there was no substantial shift between the radio and X-ray pulse peaks after the 2016 X-ray outburst. The gamma-ray pulsation after the X-ray outburst is confirmed with data taken after 2016 December, and the pulse structure and phase difference between the gamma-ray and radio peaks ($\\sim$0.4 cycle) are also consistent with those before the X-ray outburst. These multi-wavelength observations suggest that the re-configuration of the global magnetosphere after the 2016 magnetar-like outburst at most continued for about six months. We discuss the evolution of the X-ray emission after the 2016 outburst with the untwisting" +"---\nabstract: 'At the ultrafast limit of optical spin manipulation is the theoretically predicted phenomenon of optical intersite spin transfer (OISTR), in which laser-induced charge transfer between the sites of a multi-component material leads to control over magnetic order. A key prediction of this theory is that the demagnetization efficiency is determined by the availability of unoccupied states for intersite charge transfer. Employing state-of-the-art magneto-optical Kerr effect measurements with femtosecond time resolution, we probe this prediction via a systematic comparison of the ultrafast magnetic response between the 3*d* ferromagnets, Fe, Co, and Ni, and their respective Pt-based alloys and multilayers. 
We find that (i) the demagnetization efficiency in the elemental magnets increases monotonically from Fe, via Co to Ni and, (ii), that the gain in demagnetization efficiency of the multi-component system over the pure element counterpart scales with the number of empty 3*d* minority states, exactly as predicted by the OISTR effect. We support these experimental findings with *ab initio* time-dependent density functional theory calculations that we show to capture the experimental trends very well.'\nauthor:\n- Martin Borchert\n- Clemens von Korff Schmising\n- Daniel Schick\n- Dieter Engel\n- Sangeeta Sharma\n- Sam Shallcross\n- Stefan Eisebitt" +"---\nabstract: |\n We consider an acyclic network of single-server queues with heterogeneous processing rates. It is assumed that each queue is fed by the superposition of a large number of i.i.d.\u00a0Gaussian processes with stationary increments and positive drifts, which can be correlated across different queues. The flow of work departing from each server is split deterministically and routed to its neighbors according to a fixed routing matrix, with a fraction of it leaving the network altogether.\n\n We study the exponential decay rate of the probability that the steady-state queue length at any given node in the network is above any fixed threshold, also referred to as the \u2018overflow probability\u2019. In particular, we first leverage Schilder\u2019s sample-path large deviations theorem to obtain a general lower bound for the limit of this exponential decay rate, as the number of Gaussian processes goes to infinity. Then, we show that this lower bound is tight under additional technical conditions. Finally, we show that if the input processes to the different queues are non-negatively correlated, non short-range dependent fractional Brownian motions, and if the processing rates are large enough, then the asymptotic exponential decay rates of the queues coincide with the ones of" +"---\nabstract: |\n Alzheimer\u2019s disease is estimated to affect around 50 million people worldwide and is rising rapidly, with a global economic burden of nearly a trillion dollars. This calls for scalable, cost-effective, and robust methods for detection of Alzheimer\u2019s dementia (AD). We present a novel architecture that leverages acoustic, cognitive, and linguistic features to form a multimodal ensemble system. It uses specialized artificial neural networks with temporal characteristics to detect AD and its severity, which is reflected through Mini-Mental State Exam (MMSE) scores. We first evaluate it on the ADReSS challenge dataset, which is a subject-independent and balanced dataset matched for age and gender to mitigate biases, and is available through DementiaBank. Our system achieves state-of-the-art test accuracy, precision, recall, and F1-score of 83.3% each for AD classification, and state-of-the-art test root mean squared error (RMSE) of 4.60 for MMSE score regression. To the best of our knowledge, the system further achieves state-of-the-art AD classification accuracy of 88.0% when evaluated on the full benchmark DementiaBank Pitt database. Our work highlights the applicability and transferability of spontaneous speech to produce a robust inductive transfer learning model, and demonstrates generalizability through a task-agnostic feature-space. 
The source code is available at [`https://github.com/wazeerzulfikar/alzheimers-dementia`](https://github.com/wazeerzulfikar/alzheimers-dementia)\\" +"---\nabstract: 'We initiate the study of sorting permutations using *prefix block-interchanges*, which exchange any prefix of a permutation with another non-intersecting interval. The goal is to transform a given permutation into the identity permutation using as few such operations as possible. We give a 2-approximation algorithm for this problem, show how to obtain improved lower and upper bounds on the corresponding distance, and determine the largest possible value for that distance.'\nauthor:\n- Anthony Labarre\nbibliography:\n- 'sbpbi.bib'\ntitle: 'Sorting by Prefix Block-Interchanges'\n---\n\nIntroduction\n============\n\nThe problem of transforming two sequences into one another using a specified set of operations has received a lot of attention in the last decades, with applications in computational biology as *(genome) rearrangement problems*\u00a0[@fertin-combinatorics] as well as interconnection network design\u00a0[@lakshmivarahan-symmetry]. In the context of permutations, it can be equivalently formulated as follows: given a permutation $\\pi$ of $[n]=\\{1, 2, \\ldots, n\\}$ and a generating set $S$ (also consisting of permutations of $[n]$), find a minimum-length sequence of elements from $S$ that sorts $\\pi$. The problem is known to be -hard in general\u00a0[@jerrum-complexity] and -hard when parameterised by the length of a solution\u00a0[@Cai1997], but some families of operations that are" +"---\nabstract: 'Optical harmonic generation on excitons is found in ZnSe/BeTe quantum wells with type-II band alignment. Two experimental approaches with spectrally-broad femtosecond laser pulses and spectrally-narrow picosecond pulses are used for spectroscopic studies by means of second and third harmonic generation (SHG and THG). The SHG signal is symmetry-forbidden for light propagation along the structure\u2019s growth axis, which is the \\[001\\] crystal axis, but can be induced by an external magnetic field. The THG signal is detected at zero magnetic field and its intensity is field independent. A group theory analysis of SHG and THG rotational anisotropy diagrams allows us to identify the involved excitation mechanisms.'\nauthor:\n- 'Johannes Mund$^1$, Andreas Farenbruch$^1$, Dmitri R. Yakovlev$^{1,2}$, Andrei A. Maksimov$^3$, Andreas Waag$^4$, and Manfred Bayer$^{1,2}$'\ntitle: |\n Optical second and third harmonic generation on excitons\\\n in ZnSe/BeTe quantum wells\n---\n\nIntroduction {#sec.introduction}\n============\n\nNonlinear optical spectroscopy is a powerful technique to study electronic states in solids\u00a0[@Shen_book; @Boyd_book]. Among the variety of the used approaches multi-photon processes, like optical second harmonic generation (SHG), are of particular interest as they give access to the symmetry of the electronic states involved in the optical transitions\u00a0[@Fiebig05; @Pisarev10]. Using magnetic- or electric-field-induced SHG one" +"---\nabstract: 'In the mid $1990$\u2019s Seiberg and Witten determined the exact low energy effective action of ${\\mathcal{N}}=2$ supersymmetric Yang-Mills theory with gauge group $SU(2)$. Later, in the early $2000$\u2019s Nekrasov calculated this action directly using localisation techniques. 
This work serves as an introduction to the area, developing both approaches and reconciling their results.'\nauthor:\n- |\n Robert Pryor\\\n School of Mathematics and Statisitcs\\\n University of Melbourne\\\n Submitted for the degree of Master of Science\\\nbibliography:\n- 'bib.bib'\ndate: May 2020\ntitle: 'The Exact Low Energy Action of ${\\mathcal{N}}=2$ SYM via Seiberg-Witten Theory and Localisation\\'\n---\n\nAcknowledgements {#acknowledgements .unnumbered}\n================\n\nI would like to thank my supervisor, the late Professor Omar Foda, for his encouragement, insight and patience. I consider myself very lucky to have had the opportunity to work with someone who cared so much for his field and the people in it. Without his guidance I would have remained unaware of the fascinating world of supersymmetry.\n\nI would also like to thank Doctor Thomas Quella for his support and assistance in the final stage of preparing my thesis. In particular I appreciate both his readiness to review my thesis and his offer to put me in contact with" +"---\nabstract: |\n The path to understanding star formation processes begins with the study of the formation of molecular clouds. The outskirts of these clouds are characterized by low column densities that allow the penetration of ultraviolet radiation, resulting in a non-negligible ionization fraction and the charging of the small dust grains that are mixed with the gas; this diffuse phase is then coupled to the ambient magnetic field.\\\n Despite the general assumption that dust and gas are tightly correlated, several observational and theoretical studies have reported variations in the dust-to-gas ratio toward diffuse and cold clouds.\\\n In this work, we present the implementation of a new charged particles module for analyzing the dust dynamics in molecular cloud envelopes. We study the evolution of a single population of small charged grains (0.05 $\\mu$m) in the turbulent, magnetized molecular cloud envelope using this module. We show that variations in the dust-to-gas ratio arise due to the coupling of the grains with the magnetic field, forming elongated dust structures decoupled from the gas. This emphasizes the importance of considering the dynamics of charged dust when simulating the different phases of the interstellar medium, especially for star formation studies.\nauthor:\n- 'Leire Beitia-Antero'" +"---\nabstract: 'Recent advances in vehicle connectivity have allowed formation of autonomous vehicle platoons for improved mobility and traffic throughput. In order to avoid a pile-up in such platoons, it is important to ensure platoon (string) stability, which is the focus of this work. As per conventional definition of string stability, the power (2-norm) of the spacing error signals should not amplify downstream in a platoon. But in practice, it is the infinity-norm of the spacing error signal that dictates whether a collision occurs. We address this discrepancy in the first part of our work, where we reconsider string stability from a safety perspective and develop an upper limit on the maximum spacing error in a homogeneous platoon as a function of the acceleration maneuver of the lead vehicle. In the second part of this paper, we extend our previous results by providing the minimum achievable time headway for platoons with two-predecessor lookup schemes experiencing burst-noise packet losses. 
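A compact way to state the distinction drawn above (notation ours, not necessarily the paper's): writing $e_i(t)$ for the spacing error of the $i$-th following vehicle, the conventional and the safety-oriented notions of string stability require, respectively,

$$\|e_{i+1}\|_{2} \leq \|e_{i}\|_{2} \qquad \text{vs.} \qquad \|e_{i+1}\|_{\infty} \leq \|e_{i}\|_{\infty}, \qquad \text{where } \|e\|_{2}^{2} = \int_{0}^{\infty} |e(t)|^{2} \, \mathrm{d}t, \quad \|e\|_{\infty} = \sup_{t \geq 0} |e(t)|,$$

and it is the second, worst-case notion that directly bounds whether a spacing error can ever reach the collision threshold.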
Finally, we utilize throttle and brake maps to develop a longitudinal vehicle model and validate it against a Lincoln MKZ; this model is then used for numerical corroboration of the proposed time headway selection algorithms.'\nauthor:\n- 'Vamsi\u00a0Vegamoor, Sivakumar\u00a0Rathinam, and" +"---\nabstract: 'The role of multilayer networks in the emergence of several real-world phenomena has impacted network science research in recent years. The second smallest eigenvalue of the Laplacian matrix, known as algebraic connectivity, is determinative in characterizing properties such as diffusion speed and robustness. In this paper, we go beyond the special structure of one-to-one interconnection and study multilayer networks with arbitrary interconnections and investigate the problem of maximizing algebraic connectivity by allocating interlink weights subject to a limited total budget $c$. We show that our formulated optimization problem is impacted by a threshold budget $c^*$ below which the maximum algebraic connectivity reaches a known upper bound that is subject to regular optimal weights\u2013which may or may not be uniform depending on the interlayer structure. For efficient numerical approaches in regions with no analytical solution, we cast the problem into a convex optimization and consider the primal-dual setting to enable exploration from several perspectives. Particularly, a geometric transformation of dual variables leads to a graph embedding problem that is easier to interpret and is related to optimum diffusion phases, as well as to interlayer and intralayer interactions, in each region. Allowing arbitrary interconnections entails regions of multiple transitions," +"---\nabstract: 'The need for cables with high-fidelity Virtual Reality (VR) headsets remains a stumbling block on the path towards interactive multi-user VR. Due to strict latency constraints, designing fully wireless headsets is challenging, with the few commercially available solutions being expensive. These solutions use proprietary millimeter wave (mmWave) communications technologies, as extremely high frequencies are needed to meet the throughput and latency requirements of VR applications. In this work, we investigate whether such a system could be built using specification-compliant IEEE 802.11ad hardware, which would significantly reduce the cost of wireless mmWave VR solutions. We present a theoretical framework to calculate attainable live VR video bitrates for different IEEE 802.11ad channel access methods, using one or more head-mounted displays connected to a single Access Point (AP). Using the ns-3 simulator, we validate our theoretical framework, and demonstrate that a properly configured IEEE 802.11ad AP can support at least 8 headsets receiving a 4K video stream for each eye, with transmission latency under 1 millisecond.'\nauthor:\n- \ntitle: 'Towards Ultra-Low-Latency mmWave Wi-Fi for Multi-User Interactive Virtual Reality\\'\n---\n\nIntroduction\n============\n\nThe interest in VR has steadily increased since the field\u2019s revitalisation following the announcement of the Oculus Rift. Originally intended as" +"---\nabstract: 'A recent scientific debate has arisen: Which processes underlie the actual origin of the valley Hall effect (VHE) in two-dimensional materials? The original VHE emerges in samples with ballistic transport of electrons due to the anomalous velocity terms resulting from the Berry phase effect. 
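For reference, the anomalous velocity invoked here is the standard semiclassical wave-packet result (a textbook relation, not specific to this paper): an electron in a band $\varepsilon(\mathbf{k})$ with Berry curvature $\boldsymbol{\Omega}(\mathbf{k})$ moves as

$$\dot{\mathbf{r}} = \frac{1}{\hbar}\,\frac{\partial \varepsilon(\mathbf{k})}{\partial \mathbf{k}} - \dot{\mathbf{k}} \times \boldsymbol{\Omega}(\mathbf{k}), \qquad \hbar\,\dot{\mathbf{k}} = -e\,\mathbf{E}.$$

Since time-reversal symmetry makes $\boldsymbol{\Omega}$ take opposite signs in the $K$ and $K'$ valleys, the second term drives opposite transverse velocities in the two valleys, which is the ballistic VHE referred to above.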
In disordered samples though, alternative mechanisms associated with electron scattering off impurities have been suggested: (i) asymmetric electron scattering, called skew scattering, and (ii) a shift of the electron wave packet in real space, called a side-jump. It has been claimed that the side-jump not only contributes to the VHE but fully offsets the anomalous terms regardless of the drag force for fundamental reasons, and thus, the side-jump together with skew scattering become the dominant mechanisms. However, this claim is based on equilibrium theories without any external valley-selective optical pumping, which makes the results fundamentally interesting but incomplete and impracticable. In this paper, we develop a microscopic theory of the photoinduced VHE using the Keldysh nonequilibrium diagrammatic technique, and find that the asymmetric skew scattering mechanism is dominant in the vicinity of the interband absorption edge. This allows us to explain the operation of optical transistors based on the VHE.'\nauthor:\n- 'I.\u00a0Vakulchyk'" +"---\nabstract: |\n Meta-learning researchers face two fundamental issues in their empirical work: prototyping and reproducibility. Researchers are prone to make mistakes when prototyping new algorithms and tasks because modern meta-learning methods rely on unconventional functionalities of machine learning frameworks. In turn, reproducing existing results becomes a tedious endeavour \u2013 a situation exacerbated by the lack of standardized implementations and benchmarks. As a result, researchers spend inordinate amounts of time on implementing software rather than understanding and developing new ideas.\n\n This manuscript introduces `learn2learn`, a library for meta-learning research focused on solving those prototyping and reproducibility issues. `learn2learn` provides low-level routines common across a wide range of meta-learning techniques (e.g. meta-descent, meta-reinforcement learning, few-shot learning), and builds standardized interfaces to algorithms and benchmarks on top of them. In releasing `learn2learn` under a free and open source license, we hope to foster a community around standardized software for meta-learning research.\nauthor:\n- 'S\u00e9bastien M. R. Arnold[^1]'\n- Praateek Mahajan\n- Debajyoti Datta\n- Ian Bunner\n- Konstantinos Saitas Zarkias\nbibliography:\n- 'paper.bib'\ntitle: '`learn2learn`: A Library for Meta-Learning Research'\n---\n\nIntroduction\n============\n\nMeta-learning is the subfield of machine learning that endows computer programs with the ability to learn to learn." +"---\nabstract: 'We report an astronomical detection of HC$_4$NC for the first time in the interstellar medium with the Green Bank Telescope toward the TMC-1 molecular cloud with a minimum significance of $10.5 \\sigma$. The total column density and excitation temperature of HC$_4$NC are determined to be $3.29^{+8.60}_{-1.20}\\times 10^{11}$\u00a0cm$^{-2}$ and $6.7^{+0.3}_{-0.3} \\mathrm{\\ K}$, respectively, using the MCMC analysis. In addition to HC$_4$NC, HCCNC is distinctly detected whereas no clear detection of HC$_6$NC is made. We propose that the dissociative recombination of the protonated cyanopolyyne, HC$_5$NH$^+$, and the protonated isocyanopolyyne, HC$_4$NCH$^+$, are the main formation mechanisms for HC$_4$NC while its destruction is dominated by reactions with simple ions and atomic carbon. With the proposed chemical networks, the observed abundances of HC$_4$NC and HCCNC are reproduced satisfactorily.'\nauthor:\n- Ci Xue\n- 'Eric R. Willis'\n- 'Ryan A. 
Loomis'\n- Kin Long Kelvin Lee\n- 'Andrew M. Burkhardt'\n- 'Christopher N. Shingledecker'\n- 'Steven B. Charnley'\n- 'Martin A. Cordiner'\n- Sergei Kalenskii\n- 'Michael C. McCarthy'\n- Eric Herbst\n- 'Anthony J. Remijan'\n- 'Brett A. McGuire'\ntitle: 'Detection of Interstellar HC4NC and an Investigation of Isocyanopolyyne Chemistry in TMC-1 Conditions'\n---\n\nIntroduction\\[sec:intro\\]\n=========================\n\nUnderstanding the formation and destruction routes of molecules in astronomical environments remains one of" +"---\naddress: 'Box 1484, Deep River, Ont. Canada. K0J 1P0'\nauthor:\n- Michael Milgram\nbibliography:\n- 'biblio.bib'\n---\n\nA Series Representation for Riemann\u2019s Zeta Function and some Interesting Identities that Follow\n\nMichael Milgram[^1]\n\nConsulting Physicist, Geometrics Unlimited, Ltd.\n\nBox 1484, Deep River, Ont. Canada. K0J 1P0\n\nSept. 9, 2020\n\nMSC classes: 11M06, 11M26, 11M35, 11M99, 26A09, 30B40, 30E20, 33C20, 33B20, 33B99\n\nKeywords: Riemann Zeta Function, Dirichlet Eta function, alternating Zeta function, incomplete Zeta function, Generalized Exponential Integral, Bernoulli numbers, Euler numbers, Harmonic numbers, infinite series, evaluation of integrals\n\n**Abstract**\n\nUsing Cauchy\u2019s Integral Theorem as a basis, what may be a new series representation for Dirichlet\u2019s function $\\eta(s)$, and hence Riemann\u2019s function $\\zeta(s)$, is obtained in terms of the Exponential Integral function $E_{s}(i\\kappa)$ of complex argument. From this basis, infinite sums are evaluated, unusual integrals are reduced to known functions and interesting identities are unearthed. The incomplete functions $\\zeta^{\\pm}(s)$ and $\\eta^{\\pm}(s)$ are defined and shown to be intimately related to some of these interesting integrals. An identity relating Euler, Bernoulli and Harmonic numbers is developed. It is demonstrated that a known simple integral with" +"---\nauthor:\n- Francesco Marzari\nbibliography:\n- 'biblio.bib'\ndate: 'Received ....; accepted .....'\ntitle: 'Ring dynamics around an oblate body with an inclined satellite: The case of Haumea'\n---\n\n[The recent discovery of rings and massive satellites around minor bodies and dwarf planets suggests that they may often coexist, as for example around Haumea.]{} [A ring perturbed by an oblate central body and by an inclined satellite may disperse on a short timescale. The conditions under which a ring may survive are explored both analytically and numerically. ]{} [The trajectories of ring particles are integrated under the influence of the gravitational field of a triaxial ellipsoid and (a) massive satellite(s), including the effects of collisions. ]{} [A ring initially formed in the equatorial plane of the central body will be disrupted if the satellite has an inclination in the Kozai\u2013Lidov regime ($39.2^{\\circ} < i < 144.8^{\\circ}$). For lower inclinations, the ring may relax to the satellite orbital plane thanks to intense collisional damping. On the other hand, a significant J2 term easily suppresses the perturbations of an inclined satellite within a critical semi-major axis, even in the case of Kozai\u2013Lidov cycles. However, if the ring is initially inclined" +"---\nabstract: |\n This work proposes a new method for computing acceptance regions of exact multinomial tests. 
From this an algorithm is derived, which finds exact $p$-values for tests of simple multinomial hypotheses. Using concepts from discrete convex analysis, the method is proven to be exact for various popular test statistics, including Pearson\u2019s chi-square and the log-likelihood ratio. The proposed algorithm improves greatly on the naive approach using full enumeration of the sample space. However, its use is limited to multinomial distributions with a small number of categories, as the runtime grows exponentially in the number of possible outcomes.\n\n The method is applied in a simulation study, and uses of multinomial tests in forecast evaluation are outlined. Additionally, properties of a test statistic using probability ordering, referred to as the \u201cexact multinomial test\u201d by some authors, are investigated and discussed. The algorithm is implemented in the accompanying R package `ExactMultinom`.\n\n [*Keywords:*]{} Acceptance regions; goodness-of-fit test; log-likelihood ratio; Pearson\u2019s chi-square; probability mass statistic; R software\nauthor:\n- |\n Johannes Resin[^1]\\\n Heidelberg Institute for Theoretical Studies\\\n Karlsruhe Institute of Technology\nbibliography:\n- 'manuscript\\_arxiv.bib'\ntitle: '**A Simple Algorithm for Exact Multinomial Tests**'\n---\n\nIntroduction\n============\n\nMultinomial goodness-of-fit tests feature prominently in the statistical" +"---\nabstract: 'In this article, we discuss some of the recent developments in applying machine learning (ML) techniques to nonlinear dynamical systems. In particular, we demonstrate how to build a suitable ML framework for addressing two specific objectives of relevance: prediction of future evolution of a system and unveiling from given time-series data the analytical form of the underlying dynamics. This article is written in a pedagogical style appropriate for a course in nonlinear dynamics or machine learning.'\nauthor:\n- |\n Sayan Roy[^1]\\\n *Department of Physics,*\\\n *Indian Institute of Science Education and Research Bhopal,*\\\n *Bhopal Bypass Road, Bhauri, Bhopal, Madhya Pradesh, 462066, India*\\\n \\\n Debanjan Rana[^2]\\\n *Department of Chemistry,*\\\n *Indian Institute of Science Education and Research Bhopal,*\\\n *Bhopal Bypass Road, Bhauri, Bhopal, Madhya Pradesh, 462066, India*\ntitle: Machine Learning in Nonlinear Dynamical Systems\n---\n\nIntroduction\n============\n\nStudy of dynamics has fascinated mankind for many centuries. One of the early things that intrigued the human mind was the motion of objects, both animate and inanimate [@CM]. Interest and curiosity in understanding the motion of planetary objects and various natural phenomena such as wind, rain, etc. led to the development of the field of nonlinear dynamics as a major branch of study" +"---\nabstract: 'Local differential privacy has become the gold-standard of privacy literature for gathering or releasing sensitive individual data points in a privacy-preserving manner. However, locally differentially private data can twist the probability density of the data because of the additive noise used to ensure privacy. In fact, the density of privacy-preserving data (no matter how many samples we gather) is always flatter in comparison with the density function of the original data points due to convolution with the privacy-preserving noise density function. The effect is especially more pronounced when using slow-decaying privacy-preserving noises, such as the Laplace noise. 
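For concreteness, the full-enumeration baseline that the exact-multinomial-test abstract above says its algorithm improves on can be written in a few lines; this sketch computes an exact $p$-value for Pearson's chi-square statistic (the paper's own, faster algorithm ships in the R package `ExactMultinom`, and the tiny counts here are illustrative).

```python
from itertools import product
from scipy.stats import multinomial

def exact_pearson_pvalue(x, p):
    """Exact p-value of Pearson's chi-square by enumerating all outcomes."""
    n, k = sum(x), len(p)
    expected = [n * pi for pi in p]
    chi2 = lambda c: sum((ci - ei) ** 2 / ei for ci, ei in zip(c, expected))
    t_obs, pval = chi2(x), 0.0
    for head in product(range(n + 1), repeat=k - 1):   # O((n+1)^(k-1)) terms
        if sum(head) > n:
            continue
        c = list(head) + [n - sum(head)]
        if chi2(c) >= t_obs - 1e-12:                   # tolerate float ties
            pval += multinomial.pmf(c, n, p)
    return pval

print(exact_pearson_pvalue([7, 2, 1], [0.5, 0.3, 0.2]))
```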
This can result in under/over-estimation of the heavy-hitters. This is an important challenge facing social scientists due to the use of differential privacy in the 2020 Census in the United States. In this paper, we develop density estimation methods using smoothing kernels. We use the framework of deconvoluting kernel density estimators to remove the effect of privacy-preserving noise. This approach also allows us to adapt the results from non-parametric regression with errors-in-variables to develop regression models based on locally differentially private data. We demonstrate the performance of the developed methods on financial and demographic datasets.'\nauthor:\n- 'Farhad Farokhi [^1]'\nbibliography:\n- 'citation.bib'" +"---\nauthor:\n- 'Q. D\u2019Amato [^1]'\n- 'R. Gilli'\n- 'I. Prandoni'\n- 'C. Vignali'\n- 'M. Massardi'\n- 'M. Mignoli'\n- 'O. Cucciati'\n- 'T. Morishita'\n- 'R. Decarli'\n- 'M. Brusa'\n- 'F. Calura'\n- 'B. Balmaverde'\n- 'M. Chiaberge'\n- 'E. Liuzzo'\n- 'R. Nanni'\n- 'A. Peca'\n- 'A. Pensabene'\n- 'P. Tozzi'\n- 'C. Norman'\nbibliography:\n- 'J1030\\_ALMA.bib'\ndate: 'Received XXX; accepted XXX'\ntitle: 'Discovery of molecular gas fueling galaxy growth in a protocluster at z=1.7'\n---\n\nIntroduction\n============\n\nGalaxy clusters are the largest virialized structures in the Universe. The physical processes leading to their formation are mostly investigated through numerical simulations, which can now be tested through direct observations of cluster progenitors. Cosmological simulations show that most of the large-scale structure assembly takes place at a redshift between z${\\sim}$4 and z${\\sim}$1 [@boylan_2009]. At this epoch, the cosmic star formation rate (SFR) and black hole accretion peak, and most of the galaxy stellar mass is built [@madau_2014]. Protoclusters (i.e., large-scale nonvirialized structures that will collapse into galaxy clusters of at least $10^{14}~\\mathrm{M_\\odot}$; @bower_2004 [-@bower_2004]) represent the early stages of this assembly (see @overzier_2016 [-@overzier_2016] for a review). While their gravitational collapse can be studied theoretically" +"---\nabstract: 'Autonomous or teleoperated robots have been playing increasingly important roles in civil applications in recent years. Across the different civil domains where robots can support human operators, one of the areas where they can have more impact is in search and rescue (SAR) operations. In particular, multi-robot systems have the potential to significantly improve the efficiency of SAR personnel with faster search of victims, initial assessment and mapping of the environment, real-time monitoring and surveillance of SAR operations, or establishing emergency communication networks, among other possibilities. SAR operations encompass a wide variety of environments and situations, and therefore heterogeneous and collaborative multi-robot systems can provide the most advantages. In this paper, we review and analyze the existing approaches to multi-robot SAR support, from an algorithmic perspective and putting an emphasis on the methods enabling collaboration among the robots as well as advanced perception through machine vision and multi-agent active perception. Furthermore, we put these algorithms in the context of the different challenges and constraints that various types of robots (ground, aerial, surface or underwater) encounter in different SAR environments (maritime, urban, wilderness or other post-disaster scenarios). 
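The deconvoluting-kernel idea invoked in the local-differential-privacy abstract above has a particularly simple form for Laplace noise: with a Gaussian base kernel, the deconvoluting kernel is $K^{*}(u) = \phi(u)\,[1 + (b/h)^2(1 - u^2)]$, where $\phi$ is the standard normal density. A self-contained sketch with a fixed, illustrative bandwidth (bandwidth selection and the errors-in-variables regression extension are beyond this snippet):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=5000)          # sensitive data
b = 0.5                                      # Laplace scale of privacy noise
z = x + rng.laplace(0.0, b, size=x.size)     # locally private observations

def deconv_kde(z, b, h, grid):
    """Deconvoluting KDE undoing Laplace(b) noise, Gaussian base kernel."""
    u = (grid[:, None] - z[None, :]) / h
    phi = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    kstar = phi * (1.0 + (b / h) ** 2 * (1.0 - u**2))   # closed-form kernel
    return kstar.mean(axis=1) / h

grid = np.linspace(-4, 4, 201)
f_hat = deconv_kde(z, b, h=0.4, grid=grid)   # de-flattened density estimate
```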
This is, to the best of our knowledge, the first review considering" +"---\nabstract: 'We introduce a general approach for the study of the collective dynamics of non-interacting random walkers on connected networks. We analyze the movement of $R$ independent (Markovian) walkers, each defined by its own transition matrix. By using the eigenvalues and eigenvectors of the $R$ independent transition matrices, we deduce analytical expressions for the collective stationary distribution and the average number of steps needed by the random walkers to start in a particular configuration and reach specific nodes the first time (mean first-passage times), as well as global times that characterize the global activity. We apply these results to the study of mean first-encounter times for local and non-local random walk strategies on different types of networks, with both synchronous and asynchronous motion.'\nauthor:\n- 'Alejandro P.\u00a0Riascos'\n- 'David P.\u00a0Sanders'\ntitle: Mean encounter times for multiple random walkers on networks\n---\n\nIntroduction\n============\n\nThe study and understanding of dynamical processes taking place on networks have had a significant impact with important contributions in science [@VespiBook; @barabasi2016book; @NewmanBook]. In particular, the dynamics of a random walker that visits the nodes of networks following different strategies is a challenging theoretical problem where the relation between network topology and the" +"---\ndate: |\n Shaswata Chowdhury, Tapobrata Sarkar [^1] 0.4cm [*Department of Physics,\\\n Indian Institute of Technology,\\\n Kanpur 208016,\\\n India*]{}\ntitle: Modified gravity in the interior of population II stars\n---\n\nIntroduction\n============\n\nThe theory of general relativity (GR), formulated by Einstein more than a century ago, is the most successful theory of gravity, and has been validated by several precision tests. More recently, issues relating to the observed cosmic acceleration and the cosmological constant seem to indicate the necessity of modifications to GR, where one might add extra degrees of freedom to the Einstein-Hilbert action of GR. Such theories, popularly termed as \u201cmodified gravity\u201d (see, e.g. the excellent reviews of [@CliftonRev] - [@KaseRev]), are becoming increasingly popular in the literature. Some of the best studied [*avatars*]{} of modified gravity are the so called scalar-tensor theories (STTs). In these scenarios, the compatibility of modifications to GR effects in solar system tests, necessitate invoking some kind of screening mechanism, the most efficient being the Vainshtein mechanism [@Vainshtein] (see, e.g. [@JainKhoury], [@BabichevRev] for reviews), where GR is recovered in the near regime via a non-linear screening of modified gravity.\n\nThe most general physical (i.e ghost-free) theories of a scalar field coupled to gravity" +"---\nabstract: 'In order to contain the COVID-19 pandemic, countries around the world have introduced social distancing guidelines as public health interventions to reduce the spread of the disease. However, monitoring the efficacy of these guidelines at a large scale (nationwide or worldwide) is difficult. To make matters worse, traditional observational methods such as in-person reporting is dangerous because observers may risk infection. A better solution is to observe activities through network cameras; this approach is scalable and observers can stay in safe locations. 
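As a numerical cross-check of the mean first-passage times discussed in the random-walkers abstract above, the classical absorbing-chain computation solves $(I - Q)\,t = \mathbf{1}$ over the non-target nodes; this is the textbook single-walker route, not the paper's spectral formulas, and the 4-cycle graph is an illustrative example.

```python
import numpy as np

def mfpt_to_target(P, target):
    """P: row-stochastic transition matrix; returns E[T_target | start=i]."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]                 # transitions among non-target nodes
    t = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)
    out[keep] = t                             # MFPT from the target itself is 0
    return out

# simple random walk on a 4-cycle: expect [0, 3, 4, 3]
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
P = A / A.sum(axis=1, keepdims=True)
print(mfpt_to_target(P, target=0))
```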
This research team has created methods that can discover thousands of network cameras worldwide, retrieve data from the cameras, analyze the data, and report the sizes of crowds as different countries issued and lifted restrictions (also called \u201clockdown\u201d). We discover 11,140 network cameras that provide real-time data and we present the results across 15 countries. We collect data from these cameras beginning April 2020 at approximately 0.5TB per week. After analyzing 10,424,459 images from still image cameras and frames extracted periodically from video, the data reveals that the residents in some countries exhibited more activity (judged by numbers of people and vehicles) after the restrictions were lifted. In other countries, the amounts of activities showed no" +"---\nabstract: 'We present a synthesis of the astronomical observations constraining the wavelength-dependent extinction, emission, and polarization from interstellar dust from UV to microwave wavelengths on diffuse Galactic sightlines. Representative solid phase abundances for those sightlines are also derived. Given the sensitive new observations of polarized dust emission provided by the [*Planck*]{} satellite, we place particular emphasis on dust polarimetry, including continuum polarized extinction, polarization in the carbonaceous and silicate spectroscopic features, the wavelength-dependent polarization fraction of the dust emission, and the connection between optical polarized extinction and far-infrared polarized emission. Together, these constitute a set of constraints that should be reproduced by models of dust in the diffuse interstellar medium.'\nauthor:\n- 'Brandon S. Hensley'\n- 'B. T. Draine'\nbibliography:\n- 'mybib.bib'\ntitle: 'Observational Constraints on the Physical Properties of Interstellar Dust in the Post-[*Planck*]{} Era'\n---\n\nIntroduction\n============\n\nInterstellar dust is manifest at nearly all wavelengths of astronomical interest, scattering, absorbing, and emitting radiation from X-ray to radio wavelengths. Embedded in this diversity of phenomena are clues to the nature of interstellar grains\u2014their size, shape, composition, and optical properties.\n\nA combination of astronomical observations, laboratory studies, and theoretical calculations has informed a picture of interstellar dust that consists" +"---\nabstract: |\n A popular technique for selecting and tuning machine learning estimators is cross-validation. Cross-validation evaluates overall model fit, usually in terms of predictive accuracy. In causal inference, the optimal choice of estimator depends not only on the fitted models, but also on assumptions the statistician is willing to make. In this case, the performance of different (potentially biased) estimators cannot be evaluated by checking overall model fit.\n\n We propose a model selection procedure that estimates the squared $\\ell_2$-deviation of a finite-dimensional estimator from its target. The procedure relies on knowing an asymptotically unbiased \u201cbenchmark estimator\u201d of the parameter of interest. Under regularity conditions, we investigate bias and variance of the proposed criterion compared to competing procedures and derive a finite-sample bound for the excess risk compared to an oracle procedure. The resulting estimator is discontinuous and does not have a Gaussian limit distribution. Thus, standard asymptotic expansions do not apply. 
We derive asymptotically valid confidence intervals that take into account the model selection step.\n\n The performance of the approach for estimation and inference for average treatment effects is evaluated on simulated data sets, including experimental data, instrumental variables settings and observational data with selection on observables.\nauthor:\n-" +"---\ntitle: Cryogenic cometary sample return\n---\n\nScientific/Technical/Management 1\\\n1 Executive Summary 1\\\n2 Glossary 1\\\n3 Relevance 3\\\n4 [*Terra Incognita*]{}: cryogenic extraterrestrial materials 3\\\n5 Previous work: Veverka [*et al.*]{} study 6\\\n6 Site selection 6\\\n6.1 Rosetta observations of Comet 67P 6\\\n7 Sample acquisition 8\\\n7.1 Sample requirements 8\\\n7.2 Sampling technology 9\\\n7.3 Sampling verification and risk mitigation 10\\\n8 Sample preservation and return 11\\\n8.1 Thermal requirements for preservation of petrological context of ices 11\\\n8.2 Cryocooler technology 11\\\n9 Direct Earth return vs. DSG-enabled return 12\\\n10 Sample recovery and transport to JSC13\\\n10.1 Post-recovery processing 13\\\n10.2 Curation of cryogenic samples 13\\\n10.3 Analyses of cryogenic samples 13\\\n11 Work plan and schedule 14\\\n12 Roles and responsibilities of project personnel 14\\\nReferences 16\\\nBiographical sketches 19\\\nAndrew Westphal\\\nAnna Butterworth\\\nNancy Chabot\\\nJamie Elsila\\\nNeil Dello Russo\\\nMichael Evans\\\nLarry Nittler\\\nJoseph Nuth\\\nScott Sandford\\\nRhonda Stroud\\\nJohn Tomsick\\\nRonald Vervack\\\nHarold Weaver\\\nMicheal Zolensky\\\nTable of Work Effort 33\\\nCurrent and Pending Support 34\\\nAndrew Westphal\\\nBudget Narrative 35\\\nFacilities and Equipment 36\\\nRedacted Budget 38\n\n[**Cryogenic Comet Sample Return**]{} \u00a0\\\n\u00a0\\\n[Andrew J. Westphal$^1$, Larry R. Nittler$^2$, Rhonda Stroud$^3$," +"---\nabstract: 'This paper considers the quickest detection problem for hidden Markov models (HMMs) in a Bayesian setting. We construct an augmented HMM representation of the problem that allows the application of a dynamic programming approach to prove that Shiryaev\u2019s rule is an (exact) optimal solution. This augmented representation highlights the problem\u2019s fundamental information structure and suggests possible relaxations to more exotic change event priors not appearing in the literature. Finally, this augmented representation allows us to present an efficient computational method for implementing the optimal solution.'\naddress:\n- 'School of Electrical Engineering and Robotics, University of Technology (QUT), Brisbane, QLD 4000, Australia'\n- 'School of Mechanical & Mining Engineering, University of Queensland (UQ), Brisbane, QLD 4000, Australia'\n- 'School of Engineering, Australian National University (ANU), Acton, ACT 2601, Australia'\nauthor:\n- 'Jason J.\u00a0Ford'\n- Jasmin James\n- 'Timothy L.\u00a0Molloy'\nbibliography:\n- 'ref.bib'\ntitle: Exactly Optimal Bayesian Quickest Change Detection for Hidden Markov Models\n---\n\n,\n\nand\n\n,\n\nIntroduction\n============\n\nQuickest change detection (QCD) problems are concerned with the quickest (on-line) detection of a change in the statistical properties of an observed process. 
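For the quickest-change-detection abstract above: in the classical i.i.d. special case with a geometric change-time prior, Shiryaev's statistic is the posterior probability that the change has already occurred, updated recursively, and the rule stops at a threshold crossing. A sketch with assumed Gaussian pre/post-change densities (the paper's HMM setting replaces these with filtered HMM likelihoods):

```python
import numpy as np
from scipy.stats import norm

def shiryaev_stopping_time(xs, rho=0.01, threshold=0.95):
    f0 = norm(0.0, 1.0).pdf                       # pre-change density
    f1 = norm(1.0, 1.0).pdf                       # post-change density
    pi = 0.0
    for k, x in enumerate(xs, start=1):
        prior = pi + (1.0 - pi) * rho             # geometric prior update
        num = prior * f1(x)
        pi = num / (num + (1.0 - prior) * f0(x))  # Bayes update
        if pi >= threshold:
            return k                              # declare the change
    return None

rng = np.random.default_rng(1)
xs = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
print(shiryaev_stopping_time(xs))                 # typically shortly after 100
```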
Such problems naturally arise in a wide variety of applications including quality control [@nikiforov], target" +"---\nauthor:\n- 'Elaine Huynh, Angela Nyhout, Patricia Ganea, and Fanny Chevalier'\nbibliography:\n- 'biblio.bib'\ntitle: |\n Designing Narrative-Focused Role-Playing Games\\\n for Visualization Literacy in Young Children\n---\n\nIntroduction\n============\n\nFrom global-scale events such as climate change and public health crises, to societal topics such as economics, education, culture, or nutrition, many infographics, tables, and charts are created and circulated daily to help the general population better understand what is happening and what actions need to be taken. Graphical representations of data can exert considerable influence on the readership: the mere presence of a chart \u2013 regardless of triviality \u2013 is capable of convincing readers of the scientific credibility of an article [@Tal2016]. An important caveat is that, when designing data visualizations, parties may present information in ways that mislead or deceive readers [@Pandey2015], whether intentionally or not. As such, the ability to understand and appropriately handle data visualizations \u2013 or more succinctly, *visualization literacy* [@Boy:2014; @Chevalier:2018] \u2013 is a vital skill. Yet, despite its importance, visualization literacy amongst the general public remains low [@Borner:2016].\n\nRecently, the visualization community has begun to acknowledge and address this deficiency. Children are exposed to charts and other data management topics as early as grade" +"---\nabstract: 'The basic Susceptible-Infected-Recovered (SIR) model is extended to include effects of progressive social awareness, lockdowns and anthropogenic migration. It is found that social awareness can effectively contain the spread by lowering the basic reproduction rate $R_0$. Interestingly, the awareness is found to be more effective in a society which can adopt the awareness faster compared to the one having a slower response. The paper also separates the mortality fraction from the clinically recovered fraction and attempts to model the outcome of lockdowns, in absence and presence of social awareness. It is seen that staggered exits from lockdowns are not only economically beneficial but also help to curb the infection spread. Moreover, a staggered exit strategy with progressive social awareness is found to be the most efficient intervention. The paper also explores the effects of anthropogenic migration on the dynamics of the epidemic in a two-zone scenario. The calculations yield dissimilar evolution of different fractions in different zones. Such models can be convenient to strategize the division of a large zone into smaller sub-zones for a disproportionate imposition of lockdown, or, an exit from one. Calculations are done with parameters consistent with the SARS-COV-2 pathogen in the Indian context.'" +"---\nabstract: 'We introduce a dispersion approximation of weak, entropy solutions of multidimensional scalar conservation laws using variational kinetic representation, where equilibrium densities satisfy Gibbs\u2019 entropy minimization principle for a piecewise linear, convex entropy. For such solutions, we show that small scale discontinuities, measured by the entropy increments, propagate with characteristic velocities, while the large scale, shock-type discontinuities propagate with speeds close to the speeds of classical shock waves. 
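A minimal rendering of the awareness-damped SIR model in the abstract above; the exponential damping $\beta(t) = \beta_0 e^{-\varepsilon t}$ is one illustrative choice for a progressively adopted awareness, not the paper's exact functional form, and the parameter values are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta0, gamma, eps = 0.35, 0.1, 0.02   # R0 = beta0/gamma = 3.5 without awareness

def sir_aware(t, y):
    s, i, r = y
    beta = beta0 * np.exp(-eps * t)   # faster-responding society: larger eps
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir_aware, (0, 300), [0.999, 0.001, 0.0],
                t_eval=np.linspace(0, 300, 601))
peak_infected = sol.y[1].max()        # decreases as eps (awareness) grows
print(f"peak infected fraction: {peak_infected:.3f}")
```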
In the zero-limit of the scale parameter, approximate solutions converge to a unique, entropy solution of a scalar conservation law.'\nauthor:\n- 'Misha Perepelitsa [^1]'\nbibliography:\n- 'references.bib'\ntitle: Small dispersion approximation of shock wave dynamics\n---\n\nShock waves, entropy solutions, kinetic equations\n\n35L60, 35L65\n\nIntroduction\n============\n\nConsider the Cauchy problem for a quasilinear system $$\\label{QS}\n\\begin{cases}\n\\partial_t U{}+{}\\sum_{i=1}^d\\partial_{x_i} F_i(U){}={}0,\\quad (x,t)\\in\\mathbb{R}^{d+1}_+,\\\\\nU(x,0)=U_0(x),\\quad x\\in\\mathbb{R}^d,\n\\end{cases}$$ where $U:\\mathbb{R}^{d+1}_+\\to\\mathbb{R}^m, $ $F_i:\\mathbb{R}^m\\to\\mathbb{R}^{m}.$ The main difficulty in constructing weak solutions for quasilinear systems is the lack of apriori estimates on solutions in norms that control oscillations. This limits the application of such methods as viscosity or relaxation approximations of for which pointwise convergence of approximate solutions is hard to establish.\n\nThe difficulty is well illustrated on an example of a shock wave. For systems with a" +"---\nauthor:\n- 'Alexandra Lee, Daniel Archambault, and Miguel A. Nacenta'\nbibliography:\n- 'master.bib'\ntitle: The Effectiveness of Interactive Visualization Techniques for Time Navigation of Dynamic Graphs on Large Displays\n---\n\nDynamic networks are networks that change over time. Nodes and links might appear or disappear at different points in time and attribute values may change. Dynamic networks appear in many domains including social science\u00a0[@brown_fischer_goldwich_keller_young_plener_2017], transportation\u00a0[@gallotti2015multilayer], digital communications\u00a0[@gloor2004tecflow], epidemiology\u00a0[@masuda2013predicting], and others. These networks are difficult to analyze and interpret and can therefore benefit from having interactive visualization techniques applied to them.\n\nDynamic networks are most commonly visualized by two approaches\u00a0[@muller_visualization_2003; @Beck2017; @kerracher_design_2014; @Bach2017]. One approach is an interactive animated representation where the user can control which moment in time is being displayed. The other is to split the time domain into a series of timeslices and represent them separately as small multiples. This latter approach is currently the most popular in the literature. Multiple studies have shown that the small multiples approach is faster than interactive animation with no significant differences in error rate\u00a0[@5473226; @farrugia_effective_2011; @Archambault2016].\n\nThe above approaches and experiments all assume uniform slicing at a given level of granularity. However, what uniform" +"---\nabstract: |\n In this work we consider active [*local learning*]{}: given a query point $x$, and active access to an unlabeled training set $S$, output the prediction $h(x)$ of a near-optimal $h \\in H$ using significantly fewer labels than would be needed to actually learn $h$ fully. In particular, the number of label queries should be independent of the complexity of $H$, and the function $h$ should be well-defined, independent of $x$. 
This immediately also implies an algorithm for [*distance estimation*]{}: estimating the value $opt(H)$ from many fewer labels than needed to actually learn a near-optimal $h \\in H$, by running local learning on a few random query points and computing the average error.\n\n For the hypothesis class consisting of functions supported on the interval $[0,1]$ with Lipschitz constant bounded by $L$, we present an algorithm that makes $O(({1 / \\epsilon^6}) \\log(1/\\epsilon))$ label queries from an unlabeled pool of $O(({L / \\epsilon^4})\\log(1/\\epsilon))$ samples. It estimates the distance to the best hypothesis in the class to an additive error of $\\epsilon$ for an arbitrary underlying distribution. We further generalize our algorithm to more than one dimension. We emphasize that the number of labels used is independent of the complexity of" +"---\nabstract: 'This article studies the impact of carbon risk on stock pricing. To address this, we consider the seminal approach of @Gorgen-2019, who proposed estimating the carbon financial risk of equities by their carbon beta. To achieve this, the primary task is to develop a brown-minus-green (or BMG) risk factor, similar to @Fama-1992. Secondly, we must estimate the carbon beta using a multi-factor model. While @Gorgen-2019 considered that the carbon beta is constant, we propose a time-varying estimation model to assess the dynamics of the carbon risk. Moreover, we test several specifications of the BMG factor to understand which climate change-related dimensions are priced in by the stock market. In the second part of the article, we focus on the carbon risk management of investment portfolios. First, we analyze how carbon risk impacts the construction of a minimum variance portfolio. As the goal of this portfolio is to reduce unrewarded financial risks of an investment, incorporating the carbon risk into this approach fulfils this objective. Second, we propose a new framework for building enhanced index portfolios with a lower exposure to carbon risk than capitalization-weighted stock indices. Finally, we explore how carbon sensitivities can improve the robustness of factor" +"---\nabstract: 'We solve the one-dimensional time-independent Klein-Gordon equation in presence of a smooth potential well. The bound state solutions are given in terms of the Whittaker $M_{\\kappa,\\mu}(x)$ function, and the antiparticle bound state is discussed in terms of potential parameters.'\naddress:\n- 'School of Physical Sciences and Nanotechnology, Yachay Tech University, 100119 Urcuqu\u00ed, Ecuador'\n- 'School of Physical Sciences and Nanotechnology, Yachay Tech University, 100119 Urcuqu\u00ed, Ecuador'\nauthor:\n- Eduardo L\u00f3pez\n- 'Clara Rojas [^1]'\nbibliography:\n- 'well\\_cusp.bib'\ntitle: 'Bound states of a Klein-Gordon particle in presence of a smooth potential well'\n---\n\nIntroduction\n============\n\nThe discussion of the overcritical behavior of bosons requires a full understanding of the single particle spectrum. For short range potentials, the solutions of the Klein-Gordon equation can exhibit spontaneous production of antiparticles as the strength of an external potential reaches a certain value $V_0$ [@rafelski:1978]. In $1940$, Schiff, Snyder and Weinberg [@schiff:1940] carried out one of the earliest investigations of the solution of the Klein-Gordon equation with a strong external potential. 
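One simple way to make the carbon beta of the abstract above time-varying is a rolling-window regression of returns on a market factor and the BMG factor; the synthetic factor series below are placeholders, and the article's dynamic estimator may well differ from this plain rolling OLS.

```python
import numpy as np

rng = np.random.default_rng(2)
T, window = 1000, 250
mkt = rng.normal(0, 0.01, T)                        # market factor
bmg = rng.normal(0, 0.008, T)                       # brown-minus-green factor
true_beta_bmg = np.linspace(-0.2, 0.6, T)           # drifting carbon exposure
r = 0.8 * mkt + true_beta_bmg * bmg + rng.normal(0, 0.005, T)

betas = []
for t in range(window, T):
    # r_s = alpha + beta_mkt * MKT_s + beta_bmg * BMG_s + e_s on the window
    X = np.column_stack([np.ones(window), mkt[t-window:t], bmg[t-window:t]])
    coef, *_ = np.linalg.lstsq(X, r[t-window:t], rcond=None)
    betas.append(coef[2])                           # carbon beta estimate at t
```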
They solved the problem of the square well potential and discovered that there is a critical point $V_{cr}$ where the bound antiparticle mode appears to coalesce with the bound particle. In $1979$, Bawin" +"---\nabstract: 'Phase transitions have recently been formulated in the time domain of quantum many-body systems, a phenomenon dubbed dynamical quantum phase transitions (DPTs), whose phenomenology is often divided in two types. One refers to distinct phases according to long-time averaged order parameters, while the other is focused on the non-analytical behavior emerging in the rate function of the Loschmidt echo. Here we show that such DPTs can be found in systems with few degrees of freedom, i.e. they can take place without resorting to the traditional thermodynamic limit. We illustrate this by showing the existence of the two types of DPTs in a quantum Rabi model \u2014a system involving a spin-$\\frac{1}{2}$ and a bosonic mode. The dynamical criticality appears in the limit of an infinitely large ratio of the spin frequency with respect to the bosonic one. We determine its dynamical phase diagram and study the long-time averaged order parameters, whose semiclassical approximation yields a jump at the transition point. We find the critical times at which the rate function becomes non-analytical, showing its associated critical exponent as well as the corrections introduced by a finite frequency ratio. Our results open the door for the study of DPTs without" +"---\nabstract: '$\\rm Sr_2IrO_4$ is an archetypal spin-orbit-coupled Mott insulator and has been extensively studied in part because of a wide range of predicted novel states. Limited experimental characterization of these states thus far brings to light the extraordinary susceptibility of the physical properties to the lattice, particularly, the Ir-O-Ir bond angle. Here, we report a newly observed microscopic rotation of the IrO$_6$ octahedra below 50\u00a0K measured by single crystal neutron diffraction. This sharp lattice anomaly provides keys to understanding the anomalous low-temperature physics and a direct confirmation of a crucial role that the Ir-O-Ir bond angle plays in determining the ground state. Indeed, as also demonstrated in this study, applied electric current readily weakens the antiferromagnetic order via the straightening of the Ir-O-Ir bond angle, highlighting that even slight change in the local structure can disproportionately affect the physical properties in the spin-orbit-coupled system.'\nauthor:\n- Feng Ye\n- Christina Hoffmann\n- Wei Tian\n- Hengdi\u00a0Zhao\n- 'G.\u00a0Cao'\ntitle: 'Pseudospin-lattice coupling and electric control of the square-lattice iridate $\\rm Sr_2IrO_4$'\n---\n\nStrong spin-orbit interactions (SOI), along with appreciable Coulomb interactions, crystalline electric field, and large orbital hybridization in 5$d$-electron based oxides has produced a wide range" +"---\nabstract: 'In modern transportation systems, an enormous amount of traffic data is generated every day. This has led to rapid progress in short-term traffic prediction (STTP), in which deep learning methods have recently been applied. In traffic networks with complex spatiotemporal relationships, deep neural networks (DNNs) often perform well because they are capable of automatically extracting the most important features and patterns. In this study, we survey recent STTP studies applying deep networks from four perspectives. 
1) We summarize input data representation methods according to the number and type of spatial and temporal dependencies involved. 2) We briefly explain a wide range of DNN techniques from the earliest networks, including Restricted Boltzmann Machines, to the most recent, including graph-based and meta-learning networks. 3) We summarize previous STTP studies in terms of the type of DNN techniques, application area, dataset and code availability, and the type of the represented spatiotemporal dependencies. 4) We compile public traffic datasets that are popular and can be used as the standard benchmarks. Finally, we suggest challenging issues and possible future research directions in STTP.'\nauthor:\n- |\n Kyungeun Lee$^{1}$, Moonjung Eo$^{1}$, Euna Jung$^{1}$, Yoonjin Yoon$^{2}$, and Wonjong Rhee$^{1, \\ast}$\\\n $^{1}$Department of Intelligence and Information," +"---\nabstract: 'Learning-to-rank has been intensively studied and has shown significantly increasing values in a wide range of domains, such as *web search*, *recommender systems*, *dialogue systems*, *machine translation*, and even *computational biology*, to name a few. The performance of learning-to-rank methods is commonly evaluated using rank-sensitive metrics, such as *average precision* (AP) and *normalized discounted cumulative gain* (nDCG). Unfortunately, how to effectively optimize rank-sensitive objectives is far from being resolved, which has been an open problem since the dawn of learning-to-rank over a decade ago. In this paper, we introduce a simple yet effective framework for directly optimizing information retrieval (IR) metrics. Specifically, we propose a novel *twin-sigmoid* function for deriving the *exact rank positions* of documents during the optimization process *instead of using approximated rank positions or relying on the traditional sorting algorithms* (e.g., *Quicksort* [@QuickSort]). Thanks to this, the rank positions are differentiable, enabling us to reformulate *the widely used IR metrics* as differentiable ones and directly optimize them based on neural networks. Furthermore, by carrying out an in-depth analysis of the gradients, we pinpoint the potential limitations inherent with direct optimization of IR metrics based on the vanilla sigmoid. To break the limitations, we propose different" +"---\nauthor:\n- 'Marco Mignoli [^1]'\n- Roberto Gilli\n- Roberto Decarli\n- Eros Vanzella\n- Barbara Balmaverde\n- Nico Cappelluti\n- |\n \\\n Letizia P. Cassar\u00e0\n- Andrea Comastri\n- Felice Cusano\n- Kazushi Iwasawa\n- Stefano Marchesi\n- Isabella Prandoni\n- Cristian Vignali\n- Fabio Vito\n- Giovanni Zamorani\n- Marco Chiaberge\n- Colin Norman\ndate: 'Received / Accepted '\ntitle: 'The web of the Giant: spectroscopic confirmation of a Large Scale Structure around the z=6.31 quasar SDSS\u00a0J1030+0524'\n---\n\nWe report on the spectroscopic confirmation of a large scale structure around the luminous, [[*z*]{}]{}=6.31 QSO SDSS\u00a0J1030+0524, that is powered by a billion solar mass black hole. The structure is populated by at least six members, four Lyman Break Galaxies (LBGs) and two Lyman Alpha Emitters (LAEs). The four LBGs have been identified among a sample of 21 i-band dropouts with z~AB~$<\\,$25.5 selected up to projected separations of 5 physical Mpc (15 arcmin) from the QSO. Their redshifts have been determined through up to 8hr-long multi-object spectroscopic observations at 8-10m class telescopes. The two LAEs have been identified in a 6hr VLT/MUSE observation centered on the QSO. 
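The differentiable-rank device in the learning-to-rank abstract above can be sketched with plain pairwise sigmoids, $\mathrm{rank}_i = 1 + \sum_{j \ne i} \sigma((s_j - s_i)/\tau)$, which only approximates the ranks; the paper's twin-sigmoid is designed to recover the exact rank positions. An illustrative differentiable DCG objective built on the approximate ranks:

```python
import torch

def soft_ranks(scores, tau=0.1):
    diff = scores.unsqueeze(1) - scores.unsqueeze(0)  # diff[j, i] = s_j - s_i
    # include-all sum counts the j == i term as sigmoid(0) = 0.5; remove it
    return 1.0 + torch.sigmoid(diff / tau).sum(dim=0) - 0.5

scores = torch.tensor([2.0, 0.5, 1.0], requires_grad=True)
rels = torch.tensor([3.0, 0.0, 1.0])                  # graded relevance labels
dcg = ((2.0 ** rels - 1.0) / torch.log2(soft_ranks(scores) + 1.0)).sum()
(-dcg).backward()                                     # ascend DCG by gradients
print(soft_ranks(scores).detach(), scores.grad)       # ranks ~ [1, 3, 2]
```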
The redshifts of the six galaxies cover the range 6.129-6.355." +"---\nauthor:\n- 'A.\u00a0B.\u00a0Justesen[^1]'\n- 'S.\u00a0Albrecht'\nbibliography:\n- 'combined.bib'\ndate: 'Received ; accepted '\ntitle: 'The spin-orbit alignment of visual binaries'\n---\n\n[The angle between the stellar spin-axis and the orbital plane of a stellar or planetary companion has important implications for the formation and evolution of such systems. A study by @1994AJ....107..306H found that binaries with separations $a\\lesssim 30\\,$au are preferentially aligned while binaries on wider orbits are frequently misaligned. ]{} [We aim to test the robustness of the @1994AJ....107..306H results by reanalysing the sample of visual binaries with measured rotation periods using independently derived stellar parameters and a Bayesian formalism. ]{} [Our analysis is based on a combination of data from @1994AJ....107..306H and newly obtained spectroscopic data from the Hertzsprung SONG telescope, combined with astrometric data from *Gaia* DR2 and the Washington Double Star Catalog. We combine measurements of stellar radii and rotation periods to obtain stellar rotational velocities $v$. Rotational velocities $v$ are combined with measurements of projected rotational velocities $v\\sin i$ to derive posterior probability distributions of stellar inclination angles $i$. We determine line-of-sight projected spin-orbit angles by comparing stellar inclination angles with astrometric orbital inclination angles. ]{} [We find that the precision" +"---\nauthor:\n- \nbibliography:\n- 'references-final.bib'\ndate: 'June 14, 2021'\ntitle: 'Exploring British Accents: Modelling the Trap\u2013Bath Split with Functional Data Analysis'\n---\n\n**Abstract\\\n**\n\nThe sound of our speech is influenced by the places we come from. Great Britain contains a wide variety of distinctive accents which are of interest to linguistics. In particular, the \u201ca\u201d vowel in words like \u201cclass\u201d is pronounced differently in the North and the South. Speech recordings of this vowel can be represented as formant curves or as mel-frequency cepstral coefficient curves. Functional data analysis and generalised additive models offer techniques to model the variation in these curves. Our first aim is to model the difference between typical Northern and Southern vowels /\u00e6/ and //, by training two classifiers on the North-South Class Vowels dataset. Our second aim is to visualise geographical variation of accents in Great Britain. For this we use speech recordings from a second dataset, the British National Corpus (BNC) audio edition. The trained models are used to predict the accent of speakers in the BNC, and then we model the geographical patterns in these predictions using a soap film smoother. 
This work demonstrates a flexible and interpretable approach to modelling" +"---\nauthor:\n- Miriam Redi\n- Martin Gerlach\n- Isaac Johnson\n- Jonathan Morgan\n- Leila Zia\nbibliography:\n- 'Main.bib'\ntitle: 'A Taxonomy of Knowledge Gaps for Wikimedia Projects (Second Draft)'\n---\n\nExecutive Summary {#executive-summary .unnumbered}\n=================\n\nIn January 2019, prompted by the Wikimedia Movement\u2019s 2030 strategic direction\u00a0[@strategy], the Research team at the Wikimedia Foundation[^1] identified the need to develop a *knowledge gaps index*\u2014a composite index to support the decision makers across the Wikimedia movement by providing: a framework to encourage structured and targeted brainstorming discussions; data on the state of the knowledge gaps across the Wikimedia projects that can inform decision making and assist with measuring the long term impact of large scale initiatives in the Movement.\n\nAfter its first release in July 2020, the Research team has developed the second complete draft of a taxonomy of knowledge gaps for the Wikimedia projects, as the first step towards building the knowledge gap index. We studied more than 250 references by scholars, researchers, practitioners, community members and affiliates\u2014exposing evidence of knowledge gaps in readership, contributorship, and content of Wikimedia projects. We elaborated the findings and compiled the taxonomy of knowledge gaps in this paper, where we describe, group and" +"---\nabstract: 'The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (e.g., dose of medication or years of education). Current GPS methods allow estimation of the dose-response relationship between a single continuous exposure and an outcome. However, in many real-world settings, there are multiple exposures occurring simultaneously that could be causally related to the outcome. We propose a multivariate GPS method (mvGPS) that allows estimation of a dose-response surface that relates the joint distribution of multiple continuous exposure variables to an outcome. The method involves generating weights under a multivariate normality assumption on the exposure variables. Focusing on scenarios with two exposure variables, we show via simulation that the mvGPS method can achieve balance across sets of confounders that may differ for different exposure variables and reduces bias of the treatment effect estimates under a variety of data generating scenarios. We apply the mvGPS method to an analysis of the joint effect of two types of intervention strategies to reduce childhood obesity rates.'\nauthor:\n- |\n Justin R.\u00a0Williams\\\n Department of Biostatistics\\\n University of California, Los Angeles\\\n Los Angeles, CA, USA, 90049 Catherine M.\u00a0Crespi\\\n Department of Biostatistics\\\n University of" +"---\nabstract: 'A novel square equal-area map projection is proposed. The projection combines closed-form forward and inverse solutions with relatively low angular distortion and minimal cusps, a combination of properties not manifested by any previously published square equal-area projection. Thus, the new projection has lower angular distortion than any previously published square equal-area projection with a closed-form solution. Utilizing a quincuncial arrangement, the new projection places the north pole at the center of the square and divides the south pole between its four corners; the projection can be seamlessly tiled. 
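The multivariate GPS weights of the abstract above can be sketched as stabilized ratios $w_i = f(D_i)/f(D_i \mid X_i)$ under the stated multivariate normality working model, with the conditional means of the two exposures fit by least squares; the data, dimensions, and coefficients below are synthetic placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 3))                         # confounders
B = np.array([[0.5, -0.2], [0.1, 0.4], [0.0, 0.3]])
D = X @ B + rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], n)  # exposures

# conditional model: OLS of each exposure on X, shared residual covariance
Xd = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Xd, D, rcond=None)
cov_cond = np.cov((D - Xd @ coef).T)

num = multivariate_normal(D.mean(0), np.cov(D.T)).pdf(D)       # marginal f(D)
den = np.array([multivariate_normal(m, cov_cond).pdf(d)
                for m, d in zip(Xd @ coef, D)])                 # f(D | X)
w = num / den                                                   # mvGPS weights
```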
The existence of closed-form solutions makes the projection suitable for real-time visualization applications, both in cartography and in other areas, such as for the display of panoramic images.'\nauthor:\n- 'Matthew\u00a0A. Petroff'\nbibliography:\n- 'paper.bib'\ntitle: |\n A Square Equal-area Map Projection with\\\n Low Angular Distortion, Minimal Cusps, and Closed-form Solutions\n---\n\n<ccs2012> <concept> <concept\\_id>10003120.10003145.10003147.10010887</concept\\_id> <concept\\_desc>Human-centered computing\u00a0Geographic visualization</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> <concept> <concept\\_id>10010147.10010371.10010382.10010383</concept\\_id> <concept\\_desc>Computing methodologies\u00a0Image processing</concept\\_desc> <concept\\_significance>100</concept\\_significance> </concept> </ccs2012>\n\nIntroduction\n============\n\nAlthough there is a plenitude of map projections [@Snyder1987], there has been relatively little work done on square equal-area projections. As noted by @Gringorten1972, a square aspect ratio is useful for printed atlases, since it allows for" +"---\nbibliography:\n- 'bibfile.bib'\n---\n\nIntroduction\n============\n\nPolling systems are queueing networks in which multiple queues are attended by a single server that switches between the different buffers according to a given switching policy. This class of systems is used to model a large variety of systems in practice, such as communication, production, transportation, healthcare and computer systems; see, e.g., [@levy1990polling; @takagi1991application; @federgruen1996stochastic; @takagi1997queueing; @cicin2001application; @boon2011applications; @borst2018polling].\n\nIn most of the literature, rigorous analysis of stochastic polling systems under a given switching policy is carried out by deriving multi-dimensional transforms for the queue process at the polling epochs, from which the distribution of the queue can, in principle, be computed by inverting the transform. However, as explained in [@choudhury1996computing], those transforms are not available directly, and are instead expressed implicitly in a recursive form. Even more fundamental is the fact that transforms can only be computed for a class of policies that satisfy a certain branching-type property [@resing1993polling]. Nevertheless, algorithms to invert the transforms, when they can be derived, are known, and can be used to numerically compute distributions and moments.\n\nArguably, the most important moments are the first two, but both the question of whether higher moments exist, and" +"---\nabstract: 'In zones of loose sand, wind-blown sand dunes emerge due the linear instability of a flat sedimentary bed. This instability has been studied in experiments and numerical models but rarely in the field, due to the large time and length scales involved. We examine dune formation at the upwind margin of the White Sands Dune Field in New Mexico (USA), using 4 years of lidar topographic data to follow the spatial and temporal development of incipient dunes. Data quantify dune wavelength, growth rate, and propagation velocity and also the characteristic length scale associated with the growth process. We show that all these measurements are in quantitative agreement with predictions from linear stability analysis. 
This validation makes it possible to use the theory to reliably interpret dune-pattern characteristics and provide quantitative constraints on associated wind regimes and sediment properties, where direct local measurements are not available or feasible.'\nbibliography:\n- 'biblio.bib'\ntitle: Spatial and temporal development of incipient dunes\n---\n\n[An edited version of this paper was published by AGU. Copyright 2020 American Geophysical Union: Gadal, C., Narteau, C., Ewing, R. C., Gunn, A., Jerolmack, D., Andreotti, B., & Claudin, P. (2020). Spatial and temporal development of incipient dunes." +"---\nabstract: 'The results of an optical search for supernova remnants (SNRs) in the nearby irregular galaxy NGC 2366 are presented. We took interference filter images and collected spectral data in three epochs with the f/7.7 1.5 m Russian Turkish Telescope (RTT150) at T\u00dcB\u0130TAK National Observatory (TUG) located in Antalya, Turkey. The continuum-subtracted H$\\alpha$ and continuum-subtracted \\[S[ii]{}\\]$\\lambda \\lambda$6716, 6731 images and their ratios were used for the identification of SNRs. With \\[S[ii]{}\\]/H$\\alpha$ $\\geq$ 0.4 criteria, four possible SNR candidates were identified in NGC 2366 with \\[S[ii]{}\\]/H$\\alpha$ ratios of $\\sim$(0.68, 0.57, 0.55 and 0.75), H$\\alpha$ intensities of $\\sim$(2.10, 0.36, 0.14, 0.11)$\\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ \\[S[ii]{}\\]$\\lambda$6716/$\\lambda$6731 average flux ratios of $\\sim$(1.01 and 1.04), electron densities of $N_{\\rm e}$ $\\sim$(582 and 513) cm$^{-3}$ and \\[O[iii]{}\\] $\\lambda$5007/H$\\beta$ $\\lambda$4861 $\\sim$(3.6 and 2.6) line ratio values are obtained for two of the SNR candidates. A shock velocity $V_{\\rm s}$ of 80 $\\leq$ $V_{\\rm s}$ $\\leq$ 100 km s$^{-1}$ is reported. The spectral parameters are obtained for the first time for these possible SNR candidates. The locations of the four SNRs obtained here are found to be consistent with optical and radio results reported so far. One of the sources categorised earlier by *XMM-Newton* observations as" +"---\nabstract: 'A common strategy in variational image recovery is utilizing the nonlocal self-similarity (NSS) property, when designing energy functionals. One such contribution is nonlocal structure tensor total variation (NLSTV), which lies at the core of this study. This paper is concerned with boosting the NLSTV regularization term through the use of directional priors. More specifically, NLSTV is leveraged so that, at each image point, it gains more sensitivity in the direction that is presumed to have the minimum local variation. The actual difficulty here is capturing this directional information from the corrupted image. In this regard, we propose a method that employs anisotropic Gaussian kernels to estimate directional features to be later used by our proposed model. The experiments validate that our entire two-stage framework achieves better results than the NLSTV model and two other competing local models, in terms of visual and quantitative evaluation.'\nauthor:\n- 'Ezgi\u00a0Demircan-Tureyen'\n- 'Mustafa E. 
Kamasak'\nbibliography:\n- 'template.bib'\ndate: 'Received: date / Accepted: date'\ntitle: 'Nonlocal Adaptive Direction-Guided Structure Tensor Total Variation For Image Recovery [^1] '\n---\n\nIntroduction {#intro}\n============\n\nThe general inverse imaging problems seek the recovery of the underlying image $\\textbf{f} \\in \\mathbb{R}^{NC}$ (assuming that each $N$-pixel channel" +"---\nauthor:\n- \n- |\n Tianyu Zhan$^{1, \\dagger}$, Yiwang Zhou$^2$, Ziqian Geng$^1$, Yihua Gu$^1$, Jian Kang$^3$,\\\n Li Wang$^1$, Xiaohong Huang$^1$ and Elizabeth H. Slate$^4$\nbibliography:\n- './HBM\\_ref.bib'\ndate: |\n $^1$ Data and Statistical Sciences, AbbVie Inc., North Chicago, IL, USA\\\n $^2$ Department of Biostatistics, St. Jude Children\u2019s Research Hospital, Memphis, TN, USA\\\n $^3$ Department of Biostatistics, University of Michigan, Ann Arbor, MI, USA\\\n $^4$ Department of Statistics, Florida State University, Tallahassee, FL, USA\\\n $^\\dagger$ Corresponding author: Tianyu Zhan, 1 Waukegan Road, North Chicago, IL 60064, USA. [tianyu.zhan.stats@gmail.com]{} \ntitle: Deep Historical Borrowing Framework to Prospectively and Simultaneously Synthesize Control Information in Confirmatory Clinical Trials with Multiple Endpoints\n---\n\nAbstract {#abstract .unnumbered}\n========\n\nIn current clinical trial development, historical information is receiving more attention as it provides utility beyond sample size calculation. Meta-analytic-predictive (MAP) priors and robust MAP priors have been proposed for prospectively borrowing historical data on a single endpoint. To simultaneously synthesize control information from multiple endpoints in confirmatory clinical trials, we propose to approximate posterior probabilities from a Bayesian hierarchical model and estimate critical values by deep learning to construct pre-specified strategies for hypothesis testing. This feature is important to ensure study integrity by establishing prospective decision functions before" +"---\nabstract: 'The problem of bound entanglement detection is a challenging aspect of quantum information theory for higher dimensional systems. Here, we propose an indecomposable positive map for two-qutrit systems, which is shown to detect a class of positive partial transposed (PPT) states. A corresponding witness operator is constructed and shown to be weakly optimal and locally implementable. Further, we perform a structural physical approximation of the indecomposable map to make it a completely positive one, and find a new PPT-entangled state which is not detectable by certain other well-known entanglement detection criteria.'\nauthor:\n- Bihalan Bhattacharya\n- Suchetana Goswami\n- Rounak Mundra\n- Nirman Ganguly\n- Indranil Chakrabarty\n- Samyadeb Bhattacharya\n- 'A. S. Majumdar'\nbibliography:\n- 'PPT\\_ref.bib'\ntitle: 'Generating and detecting bound entanglement in two-qutrits using a family of indecomposable positive maps'\n---\n\nIntroduction\n============\n\nThe inseparable feature of quantum states [@EPR_35; @S_35; @R_89; @B_64] plays the most crucial role in various information processing tasks [@BW_92; @BBCJPW_93; @BCWSW_12; @AMP_12]. Entanglement is the central feature of the theory of quantum information science and the detection of entanglement in an arbitrary quantum system is considered to be one of the most fundamental aspects of the subject. The most effective way" +"---\nabstract: 'Neural codes, represented as collections of binary strings called codewords, are used to encode neural activity. 
A code is called convex if its codewords are represented as an arrangement of convex open sets in Euclidean space. Previous work has focused on addressing the question: how can we tell when a neural code is convex? Giusti and Itskov identified a local obstruction and proved that convex neural codes have no local obstructions. The converse is true for codes on up to four neurons, but false in general. Nevertheless, we prove that this converse holds for codes with up to three maximal codewords, and moreover the minimal embedding dimension of such codes is at most two. 0.1cm **Keywords:** neural code, convex, simplicial complex, link, contractible'\naddress:\n- $^1$Lafayette College\n- '$^2$Texas A&M University'\n- $^3$University of Portland\nauthor:\n- Katherine Johnston$^1$\n- Anne Shiu$^2$\n- Clare Spinner$^3$\nbibliography:\n- 'mybibliography.bib'\ndate: 'August 30, 2020'\ntitle: |\n Neural codes with three maximal codewords:\\\n convexity and minimal embedding dimension\n---\n\nIntroduction\n============\n\nThe brain encodes spatial structure through neurons in the hippocampus known as *place cells*, which are associated with regions of space called *receptive fields*. Place cells fire at a high" +"---\nabstract: 'Document-level relation extraction is a challenging task which requires reasoning over multiple sentences in order to predict relations in a document. In this paper, we propose a joint training framework [*E2GRE*]{} (Entity and Evidence Guided Relation Extraction) for this task. First, we introduce entity-guided sequences as inputs to a pretrained language model (e.g. BERT, RoBERTa). These entity-guided sequences help a pretrained language model (LM) to focus on areas of the document related to the entity. Secondly, we guide the fine-tuning of the pretrained language model by using its internal attention probabilities as additional features for evidence prediction. Our new approach encourages the pretrained language model to focus on the entities and supporting/evidence sentences. We evaluate our [*E2GRE*]{} approach on DocRED, a recently released large-scale dataset for relation extraction. Our approach is able to achieve state-of-the-art results on the public leaderboard across all metrics, showing that our [*E2GRE*]{} is both effective and synergistic on relation extraction and evidence prediction.'\nauthor:\n- |\n Kevin Huang$^{\\dagger}$, Guangtao Wang$^{\\dagger}$, Tengyu Ma$^{\\ddagger}$, Jing Huang$^{\\dagger}$\\\n JD AI Research$^{\\dagger}$, Stanford University$^{\\ddagger}$\\\n `{kevin.huang3, guangtao.wang, jing.huang}@jd.com`\\\n `tengyuma@stanford.edu`\nbibliography:\n- 'emnlp2020.bib'\ntitle: Entity and Evidence Guided Relation Extraction for DocRED\n---\n\nIntroduction\n============\n\nRelation Extraction (RE), the problem" +"---\nabstract: 'In this paper, an intelligent reflecting surface (IRS) is deployed to assist the terahertz (THz) communications. The molecular absorption causes path loss peaks to appear in the THz frequency band, and the fading peak is greatly affected by the transmission distance. In this paper, we aim to maximize the sum rate with individual rate constraints, in which the IRS location, IRS phase shift, the allocation of sub-bands of the THz spectrum, and power control for UEs are jointly optimized. For the special case of a single user equipment (UE) with a single sub-band, the globally optimal solution is provided. 
For the general case with multiple UEs, the block coordinate searching (BCS) based algorithm is proposed to solve the non-convex problem. Simulation results show that the proposed scheme can significantly enhance system performance.'\nauthor:\n- '[^1] [^2] [^3] [^4]'\nbibliography:\n- 'Reference.bib'\ntitle: Sum Rate Maximization for Intelligent Reflecting Surface Assisted Terahertz Communications\n---\n\nIntelligent reflecting surface (IRS), Terahertz (THz) communication, Reconfigurable intelligent surface (RIS).\n\nIntroduction\n============\n\nThe terahertz (THz) band wireless transmission has been envisioned as a promising solution to meet the ultra-high data rate requirements of the emerging applications such as the virtual reality (VR) service. However," +"---\naddress:\n- ', , '\n- ', , '\nauthor:\n- 'Dai Feng\\*'\n- Richard Baumgartner\nbibliography:\n- 'wileyNJD-AMS.bib'\ntitle: 'Random Forest (RF) Kernel for Regression, Classification and Survival [^1]'\n---\n\nIntroduction {#sec1}\n============\n\nRandom forest (RF) has been a successful and time-proven statistical machine learning method [@biau2016]. At first, RF was developed for classification and regression [@breiman2000]. Recently, it has been extended and adopted for additional types of targets such as time-to-event or ordered outcomes [@ishwaran2019]. RF belongs to the ensemble methods, where \u201cbase\u201d tree learners are grown on bootstrapped samples of the training data set and then their predictions are aggregated to yield a final prediction. RF was conceived originally under the frequentist framework. However, Bayesian counterparts e.g. Mondrian random forest were also proposed [@balog2016].\n\nIn Ref [@breiman2000], Breiman pointed out an alternative interpretation of the RF as a kernel generator. The $n \\times n$ proximity matrix (where $n$ is the number of samples) naturally ensuing from the construction of the RF plays here a key role. Each entry of the RF proximity matrix is an estimate of the probability that two points end up in the same terminal node [@breiman2000]. It is a symmetric positive-semidefinite matrix" +"---\nabstract: |\n Every oriented closed geodesic on the modular surface has a canonically associated knot in its unit tangent bundle coming from the periodic orbit of the geodesic flow. We study the volume of the associated knot complement with respect to its unique complete hyperbolic metric. We show that there exist sequences of closed geodesics for which this volume is bounded linearly in terms of the period of the geodesic\u2019s continued fraction expansion. Consequently, we give a volume\u2019s upper bound for some sequences of Lorenz knots complements, linearly in terms of the corresponding braid index.\n\n Also, for any punctured hyperbolic surface we give volume\u2019s bounds for the canonical lift complement relative to some sequences of sets of closed geodesics in terms of the geodesics length.\nauthor:\n- 'JOSE ANDRES RODRIGUEZ-MIGUELES'\ntitle: Periods of continued fractions and volumes of modular knots complements \n---\n\nIntroduction\n============\n\nLet $\\Sigma$ be a complete, orientable hyperbolic surface or $2$-orbifold of finite area. An oriented closed geodesic $\\gamma$ on $\\Sigma$ has a canonical lift $\\widehat\\gamma$ in its unit tangent bundle $T^1\\Sigma,$ namely the corresponding periodic orbit of the geodesic flow. 
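Breiman's proximity construction mentioned in the RF-kernel abstract above is easy to reproduce with scikit-learn: `apply()` returns each sample's terminal node per tree, and averaging leaf co-occurrence over trees yields the kernel entries; the dataset and hyperparameters here are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

leaves = rf.apply(X)                               # (n_samples, n_trees) leaf ids
# K[i, j] estimates P(i and j fall in the same terminal node)
K = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
print(K.shape, K.diagonal().min())                 # symmetric, unit diagonal
```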
Let $M_{\\widehat\\gamma}$ denote the complement of a regular neighborhood of $\\widehat\\gamma$ in $T^1\\Sigma.$ As a" +"---\nauthor:\n- 'Yanan\u00a0Sun,\u00a0 \u00a0Xian\u00a0Sun, \u00a0Yuhan\u00a0Fang, and Gary\u00a0G.\u00a0Yen,\u00a0 [^1] [^2] [^3]'\ntitle: A Novel Training Protocol for Performance Predictors of Evolutionary Neural Architecture Search Algorithms\n---\n\nIntroduction {#section_introduction}\n============\n\nDeep neural networks (DNNs) are becoming the dominant approach in machine learning\u00a0[@lecun2015deep], largely owing to their superiority in solving challenging real-world applications\u00a0[@hinton2006reducing; @krizhevsky2012imagenet]. Generally, the performance of DNNs relies on two deciding factors: the architecture of the DNN and the weights associated with that architecture. The performance of a DNN in solving the corresponding problem is promising only when its architecture and weights achieve the optimal combination simultaneously. Commonly, when the architecture of a DNN is determined, the optimal weights can be obtained by formulating the loss as a continuous function and then employing exact optimization algorithms. In practice, gradient-based optimization algorithms are the most popular ones for addressing the loss function, although they cannot theoretically guarantee the global optimum\u00a0[@kearney1987optical]. On the other hand, obtaining the optimal architecture is not a trivial task because architectures cannot be directly optimized in the way the weights can. In practice, most, if not all, prevalent state-of-the-art DNN architectures are manually designed" +"---\nabstract: 'With recent developments in convolutional neural networks, deep learning for 3D point clouds has shown significant progress in various 3D scene understanding tasks, e.g., object recognition and semantic segmentation. In a safety-critical environment, it is however not well understood how such deep learning models are vulnerable to adversarial examples. In this work, we explore adversarial attacks for point cloud-based neural networks. We propose a unified formulation for adversarial point cloud generation that can generalise two different attack strategies. Our method generates adversarial examples by attacking the classification ability of point cloud-based networks while considering the perceptibility of the examples and ensuring the minimal level of point manipulations. 
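The abstract above trades attack success against perceptibility and the number of manipulated points. A generic gradient-based sketch of that trade-off (not the authors' formulation; the stand-in classifier and weighting are invented for illustration):

```python
import torch

# Stand-in classifier mapping (B, N, 3) point clouds to 10 class logits.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(1024 * 3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10),
)

def attack(points, label, steps=50, lr=1e-3, lam=1.0):
    """Maximize the classification loss while penalizing L2 distortion
    (a crude proxy for perceptibility)."""
    delta = torch.zeros_like(points, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(points + delta)
        loss = -torch.nn.functional.cross_entropy(logits, label) \
               + lam * delta.norm(p=2)
        opt.zero_grad(); loss.backward(); opt.step()
    return (points + delta).detach()

adv = attack(torch.rand(1, 1024, 3), torch.tensor([0]))
```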
Experimental results show that our method achieves state-of-the-art performance, with attack success rates above 89% and 90% on synthetic and real-world data, respectively, while manipulating only about 4% of the total points.'\nauthor:\n- |\n Jaeyeon Kim$^1$ Binh-Son Hua$^{2,3}$ Duc Thanh Nguyen$^4$ Sai-Kit Yeung$^{1}$\\\n $^1$Hong Kong University of Science and Technology $^2$VinAI Research, Vietnam\\\n $^4$Deakin University $^3$VinUniversity, Vietnam\\\nbibliography:\n- 'egbib.bib'\ntitle: Minimal Adversarial Examples for Deep Learning on 3D Point Clouds\n---\n\nIntroduction\n============\n\nDeep learning has shown great potential in solving a wide spectrum of computer vision" +"---\nauthor:\n- 'Safwan Alfattani, Wael Jaafar, Yassine Hmamouche, Halim Yanikomeroglu, and Abbas Yonga\u00e7oglu [^1] [^2] [^3] [^4] [^5]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'Final\\_accepted\\_version.bib'\ntitle: Link Budget Analysis for Reconfigurable Smart Surfaces in Aerial Platforms\n---\n\nIntroduction\n============\n\nAs the fifth generation (5G) of wireless systems is being actively deployed, researchers in the wireless community have started investigating new technologies and innovative solutions to tackle the challenges and fulfill the demands of future networks. Given the inherent limitations of terrestrial environments, non-terrestrial networks are envisioned as an enabling technology for ubiquitous connectivity in future wireless communications. Moreover, the standardization efforts of the Third Generation Partnership Project (3GPP) aiming to utilize aerial platforms for 5G are demonstrated by the standardization documents TR 38.811 [@3gpp2017Technical], TR 22.829 [@3gpp2017Technical_2], and TS 22.125 [@3gpp2017Technical_3]. Furthermore, several commercial projects are either in their initial phases of deployment or under development, which aim to design different types of aerial platforms capable of supporting wireless communications. Such projects include the Starlink LEO constellation by SpaceX [@starlink], the Stratobus HAPS by Thales [@thales], and the Nokia Drone Networks [@Nokia]. Nevertheless, the current size, weight, and power (SWAP) limitations of aerial platforms need to be further addressed.\n\nOn the other" +"---\nabstract: 'We introduce a hybrid plasmonic-photonic cavity setup that can be used to induce and control long-distance heat transfer between molecular systems through optomechanical interactions. The structure consists of two separated plasmonic nanoantennas coupled to a dielectric cavity. The hybrid modes of this resonator can combine the large optomechanical coupling of the sub-wavelength plasmonic modes with the large quality factor and delocalized character of the cavity mode that extends over a large distance ($\\sim\\mu$m). We show that this can lead to effective long-range heat transport between molecular vibrations that can be actively controlled through an external driving laser.'\nauthor:\n- 'S. Mahmoud Ashrafi'\n- 'R. Malekfar'\n- 'A. R. 
Bahrampour'\n- Johannes Feist\nbibliography:\n- 'references.bib'\ntitle: 'Long-distance heat transfer between molecular systems through a hybrid plasmonic-photonic nanoresonator'\n---\n\nIntroduction\n============\n\nEnergy transfer between quantum emitters (quantum dots, molecules, atoms, \u2026) is a process of fundamental importance for a large range of phenomena in quantum information, quantum thermodynamics, quantum biology, photosynthesis, solar cells, etc.\u00a0[@Nagali2009; @Northup2014; @Nalbach2010; @Dubi2011; @Katz2016; @Lee2007; @Scholes2011; @High2008; @Menke2013]. One powerful strategy to modify these processes is by coupling the emitters with an electromagnetic mode and mediating transport through photon absorption and emission\u00a0[@Gerry2004; @Messina2012;" +"---\nauthor:\n- Andr\u00e9 David\n- Giampiero Passarino\nbibliography:\n- 'reuse2.bib'\ntitle: Use and reuse of SMEFT\n---\n\nIntroduction \\[Intro\\]\n======================\n\nThe SMEFT\u00a0[@Passarino:2016pzb; @Brivio:2017vri; @Passarino:2019yjx] is a framework that consistently extends the standard model (SM) and allows to capture the effects of beyond-standard-model (BSM) physics in a reasonably general fashion.\n\nIn order to define the SM effective[-]{}field[-]{}theory (SMEFT) we start by considering a broader scenario: there is a \u201cstandard\u201d theory, $X$, described by a Lagrangian based on a symmetry group G. The definition of the EFT extension of $X$ (say, XEFT) requires a circumstantial description for which we need to consider $X^{\\prime}$, the ultraviolet (UV) completion of $X$ or the next theory in a tower of EFTs.\n\nThe parameters of the \u201cstandard\u201d $X$ theory are always measured to within some error. Having uncertainties in the parameters leads to hypothesizing a higher structure where the SM Higgs boson mixes with additional scalars. Given the most recent results\u00a0[@1798909; @Sirunyan:2018hoz; @Sirunyan:2018kst; @Aaboud:2018urx; @Aaboud:2018zhk] we have to admit that this amount of mixing is observed to be rather constrained, especially because data continue to push the Higgs couplings towards the SM-like limits.\n\nThere are two main, non-exclusive, paths in going from $X$" +"---\nabstract: 'A novel random walk based model for inter-core cross-talk (IC-XT) characterization of multi-core fibres capable of accurately representing both time-domain distribution and frequency-domain representation of experimental IC-XT has been proposed. It was demonstrated that this model is a generalization of the most widely used model in literature to which it will converge when the number of samples and measurement time-window tend to infinity. In addition, this model is consistent with statistical analysis such as short term average crosstalk (STAXT), keeping the same convergence properties and it showed to be almost independent to time-window. To validate this model, a new type of characterization of the IC-XT in the dB domain (based on a pseudo random walk) has been proposed and the statistical properties of its step distribution have been evaluated. The performed analysis showed that this characterization is capable of fitting every type of signal source with an accuracy above 99.3%. It also proved to be very robust to time-window length, temperature and other signal properties such as symbol rate and pseudo-random bit stream (PRBS) length. 
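To illustrate the pseudo-random-walk characterization in the IC-XT record above, a toy simulation in the dB domain with a short-term (STAXT-like) average; the mean level, step size, and window length are invented for illustration, whereas the paper fits the step distribution to measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n, mean_db, step_db = 100_000, -35.0, 0.05   # assumed values

# Random walk in the dB domain with a weak pull toward the mean level.
xt_db = np.empty(n)
xt_db[0] = mean_db
for k in range(1, n):
    xt_db[k] = xt_db[k - 1] + 0.001 * (mean_db - xt_db[k - 1]) \
               + rng.normal(0.0, step_db)

# Short-term average crosstalk: sliding average in the linear domain,
# converted back to dB.
window = 500
xt_lin = 10 ** (xt_db / 10)
staxt_db = 10 * np.log10(np.convolve(xt_lin, np.ones(window) / window, mode="valid"))
```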
The obtained results suggest that the model was able to communicate most of the relevant information using a short observation time, making it" +"---\nabstract: |\n We present benchmark integrated and differential cross-sections for electron collisions with H$_2$ using two different theoretical approaches, namely, the R-matrix and molecular convergent close-coupling (MCCC). This is similar to comparative studies conducted on electron-atom collisions for H, He and Mg. Electron impact excitation to the $b \\ ^3\\Sigma_u^+$, $a \\ ^3\\Sigma_g^+$, $B \\\n ^1\\Sigma_u^+$, $c \\ ^3\\Pi_u$, $EF \\ ^1\\Sigma_g^+$, $C \\ ^1\\Pi_u$, $e \\ ^3\\Sigma_u^+$, $h \\\n ^3\\Sigma_g^+$, $B' \\ ^1\\Sigma_u^+$ and $d \\ ^3\\Pi_u$ excited electronic states are considered. Calculations are presented in both the fixed nuclei and adiabatic nuclei approximations, where the latter is shown only for the $b \\ ^3\\Sigma_u^+$ state. Good agreement is found for all transitions presented. Where available, we compare with existing experimental and recommended data.\naddress: |\n $^1$Department of Physics and Astronomy, University College London, London WC1E 6BT, United Kingdom.\\\n $^2$Institute of Theoretical Physics, Faculty of Mathematics and Physics, Charles University, V Hole\u0161ovi\u010dk\u00e1ch 2, 180 00 Prague 8, Czech Republic.\\\n $^3$Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA.\\\n $^4$Curtin Institute for Computation and Department of Physics and Astronomy, Curtin University, Perth, WA 6102, Australia. \nauthor:\n- 'T. Meltzer$^1$, J. Tennyson$^1$, Z. Ma[\u0161]{}[\u00ed]{}n$^2$, M. C. Zammit$^3$, L." +"---\nabstract: 'In this paper, we develop a resource allocation framework to optimize the downlink transmission of a backhaul-aware multi-cell cognitive radio network (CRN) which is enabled with multi-carrier non-orthogonal multiple access (MC-NOMA). The considered CRN is composed of a single macro base station (MBS) and multiple small BSs (SBSs) that are referred to as the primary and secondary tiers, respectively. For the primary tier, we consider an orthogonal frequency division multiple access (OFDMA) scheme and use Quality of Service (QoS) to evaluate user satisfaction. In the secondary tier, on the other hand, MC-NOMA is employed and user satisfaction for web, video, and audio, as popular multimedia services, is evaluated by Quality-of-Experience (QoE). Furthermore, each user in the secondary tier can be served simultaneously by multiple SBSs over a subcarrier via Joint Transmission (JT). In particular, we formulate a joint optimization problem of power control and scheduling (i.e., user association and subcarrier allocation) in the secondary tier to maximize the total achievable QoE for the secondary users. An efficient resource allocation mechanism has been developed to handle the non-linear interference terms and to overcome the non-convexity of the QoE serving functions. The scheduling and power control policy leverages the Augmented Lagrangian Method (ALM). Simulation" +"---\nabstract: |\n This paper concerns the recent Virasoro conjecture for the theory of stable pairs on a 3-fold proposed by Oblomkov, Okounkov, Pandharipande and the author in [@virasorotoricPT]. 
Here we extend the conjecture to 3-folds with non-$(p,p)$-cohomology and we prove it in two specializations.\n\n For the first specialization, we let $S$ be a simply-connected surface and consider the moduli space $P_n(S\\times {\\mathbb{P}}^1, n[{\\mathbb{P}}^1])$, which happens to be isomorphic to the Hilbert scheme $S^{[n]}$ of $n$ points on $S$. The Virasoro constraints for stable pairs, in this case, can be formulated entirely in terms of descendents in the Hilbert scheme of points. The two main ingredients of the proof are the toric case and the existence of universal formulas for integrals of descendents on $S^{[n]}$. The second specialization consists in taking the 3-fold $X$ to be a cubic and the curve class $\\beta$ to be the line class. In this case we compute the full theory of stable pairs using the geometry of the Fano variety of lines.\naddress: 'ETH Z\u00fcrich, Department of Mathematics'\nbibliography:\n- 'bibliographycubicPT.bib'\ntitle: |\n \\\n (with applications to the Hilbert scheme of points of a surface)\\\n---\n\nIntroduction\n============\n\nStable pairs\n------------\n\nLet $X$ be" +"---\nabstract: 'Current action recognition systems require large amounts of training data for recognizing an action. Recent works have explored the paradigm of zero-shot and few-shot learning to learn classifiers for unseen categories or categories with few labels. Following similar paradigms in object recognition, these approaches utilize external sources of knowledge (e.g., knowledge graphs from language domains). However, unlike objects, it is unclear what is the best knowledge representation for actions. In this paper, we intend to gain a better understanding of knowledge graphs (KGs) that can be utilized for zero-shot and few-shot action recognition. In particular, we study three different construction mechanisms for KGs: action embeddings, action-object embeddings, and visual embeddings. We present extensive analysis of the impact of different KGs in different experimental setups. Finally, to enable a systematic study of zero-shot and few-shot approaches, we propose an improved evaluation paradigm based on UCF101, HMDB51, and Charades datasets for knowledge transfer from models trained on Kinetics.'\nauthor:\n- Pallabi Ghosh$^1$\n- Nirat Saini$^1$\n- 'Larry S. Davis$^1$'\n- Abhinav Shrivastava$^1$\nbibliography:\n- 'biblio.bib'\ntitle: All About Knowledge Graphs for Actions \n---\n" +"---\nbibliography:\n- 'reference.bib'\n---\n\nIntroduction {#sec:intro}\n============\n\nRandom matrix theory is an important topic in its own right and has been proven to be a powerful tool in a wide range of applications in statistics, high-energy physics, and number theory. Wigner matrices, symmetric matrices with mean-zero independent and identically distributed (i.i.d.) entries (subject to the symmetry constraint), have been a particular focus. Asymptotic and non-asymptotic properties of the spectrum of Wigner matrices have been widely studied in the literature. See, for example, [@anderson2010introduction; @tao2012topics; @vershynin2010introduction] and the references therein.\n\nMotivated by a range of applications, heteroskedastic Wigner-type matrices, random matrices with independent heteroskedastic entries, have attracted much recent attention. 
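As a toy illustration of the heteroskedastic Wigner-type ensembles just introduced, the following sketch samples a symmetric Gaussian matrix with a variance profile and compares its spectral norm with the scale $\sigma = \max_i (\sum_j \mathrm{Var}\, Z_{ij})^{1/2}$ that sets the leading term in the non-asymptotic bounds of Bandeira and van Handel (the profile values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 500

# Symmetric variance profile S[i, j] = Var(Z_ij); a constant S recovers
# the classical Wigner case.
S = rng.uniform(0.1, 1.0, size=(p, p))
S = (S + S.T) / 2

G = rng.normal(size=(p, p)) * np.sqrt(S)
Z = np.triu(G) + np.triu(G, 1).T      # symmetrize, keeping Var(Z_ij) = S[i, j]

sigma = np.sqrt(S.sum(axis=1).max())  # max row-wise standard deviation scale
print(np.linalg.norm(Z, 2), 2 * sigma)
```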
A central problem of interest is the characterization of the dependence of the spectral norm $\\|\\cdot\\|$ (i.e., the largest singular value of the matrix) of a heteroskedastic Wigner-type matrix on the variances of its entries. To answer this question, Ajanki, Erd\u0151s, Kr\u00fcger [@ajanki2017universality] established the asymptotic behavior of the resolvent, a local law down to the smallest spectral resolution scale, and bulk universality for the heteroskedastic Wigner-type matrix. Bandeira and van Handel [@bandeira16sharp] proved an non-asymptotic upper bound for the spectral norm. More specifically, let $Z=(Z_{ij})$ be a $p\\times p$" +"---\nabstract: 'This paper describes the participation of the QMUL-SDS team for Task 1 of the CLEF 2020 `CheckThat!` shared task. The purpose of this task is to determine the check-worthiness of tweets about COVID-19 to identify and prioritise tweets that need fact-checking. The overarching aim is to further support ongoing efforts to protect the public from fake news and help people find reliable information. We describe and analyse the results of our submissions. We show that a CNN using COVID-Twitter-BERT (CT-BERT) enhanced with numeric expressions can effectively boost performance from baseline results. We also show results of training data augmentation with rumours on other topics. Our best system ranked fourth in the task with encouraging outcomes showing potential for improved results in the future.'\nauthor:\n- 'Rabab Alkhalifa, Theodore Yoong, Elena Kochkina, Arkaitz Zubiaga,'\n- Maria Liakata\nbibliography:\n- 'bib.bib'\ntitle: |\n QMUL-SDS at `CheckThat!` 2020:\\\n Determining COVID-19 Tweet Check-Worthiness Using an Enhanced CT-BERT with Numeric Expressions\n---\n\nIntroduction\n============\n\nThe vast majority of people seek information online and consider it a touchstone of guidance and authority [@miller2012online]. In particular, social media has become the key resource to go to for following updates during times of crisis [@palen2008online]. Any" +"---\nauthor:\n- |\n Flore Rembert$\\,^{1,\\star}$, Damien Jougnot$\\,^{1}$, Luis Guarracino$\\,^{2}$\\\n \\\n $^{1}\\ $Sorbonne Universit\u00e9, CNRS, UMR 7619 METIS, FR-75005 Paris, France\\\n $^{2}\\ $CONICET, Faculdad de Ciencias Astron\u00f3micas y Geof\u00edsicas, Universidad Nacional de La Plata, Paseo del Bosque s/n, 1900 La Plata, Argentina\\\n \\\n $^{\\star}\\ $Corresponding author: flore.rembert@sorbonne-universite.fr\nbibliography:\n- 'article\\_tortuous\\_v9.bib'\ntitle: 'A fractal model for the electrical conductivity of water-saturated porous media during mineral precipitation-dissolution processes'\n---\n\n#### Highlights\n\n- A new electrical conductivity model is obtained from a fractal upscaling procedure\n\n- The formation factor is obtained from microscale properties of the porous medium\n\n- Transport properties are predicted from the electrical conductivity\n\n- The model can reproduce dissolution and precipitation processes in carbonates\n\n#### Abstract\n\nPrecipitation and dissolution are prime processes in carbonate rocks and being able to monitor them is of major importance for aquifer and reservoir exploitation or environmental studies. Electrical conductivity is a physical property sensitive both to transport phenomena of porous media and to dissolution and precipitation processes. However, its quantitative use depends on the effectiveness of the petrophysical relationship to relate the electrical conductivity to hydrological properties of interest. 
In this work, we develop a new physically-based model to estimate the electrical conductivity" +"---\nabstract: |\n We study the existence of ground state standing waves, of prescribed mass, for the nonlinear Schr\u00f6dinger equation with mixed power nonlinearities $$\\begin{aligned}\n i \\partial_t v + \\Delta v + \\mu v |v|^{q-2} + v |v|^{2^* - 2} = 0, \\quad (t, x) \\in {\\mathbb{R}}\\times {\\mathbb{R}^N}, \\end{aligned}$$ where $N \\geq 3$, $v: {\\mathbb{R}}\\times {\\mathbb{R}^N}\\to {\\mathbb{C}}$, $\\mu > 0$, $2 < q < 2 + 4/N $ and $2^* = 2N/(N-2)$ is the critical Sobolev exponent. We show that all ground states correspond to local minima of the associated energy functional. Next, despite the fact that the nonlinearity is Sobolev critical, we show that the set of ground states is orbitally stable. Our results settle a question raised by N. Soave [@Soave2020Sobolevcriticalcase].\\\naddress:\n- ' **[Louis Jeanjean]{}** Laboratoire de Math\u00e9matiques (CNRS UMR 6623), Universit\u00e9 de Bourgogne Franche-Comt\u00e9, Besan\u00e7on 25030, France'\n- ' **[Jacek Jendrej]{}** CNRS and LAGA (CNRS UMR 7539), Universit\u00e9 Sorbonne Paris Nord, Villetaneuse 93430, France'\n- ' **[Thanh Trung Le ]{}** Laboratoire de Math\u00e9matiques (CNRS UMR 6623), Universit\u00e9 de Bourgogne Franche-Comt\u00e9, Besan\u00e7on 25030, France'\n- ' **[Nicola Visciglia ]{}** Dipartimento di Matematica, Universit\u00e0 Degli Studi di Pisa, Largo Bruno Pontecorvo, 5, 56127, Pisa, Italy'\nauthor:\n-" +"---\nabstract: 'The COVID-19 pandemic brought unprecedented levels of disruption to the local and regional transportation networks throughout the United States, especially the Motor City\u2014Detroit. That was mainly a result of swift restrictive measures such as statewide quarantine and lock-down orders to confine the spread of the virus and flatten the curve, along with a natural reaction of the population to the rising number of COVID-19-related cases and deaths. This work is driven by analyzing five types of real-world data sets from Detroit related to: traffic volume, daily cases, weather, social distancing index, and crashes from January 2019 to June 2020. The primary goal is to assess the impacts of COVID-19 on transportation network usage (traffic volume) and safety (crashes) for the City of Detroit, to explore the potential correlation between these diverse data features, and to determine whether each type of data (e.g., traffic volume) could be a useful factor in confirmed-cases prediction. In addition, early prediction of COVID-19 rates can be a vital contributor to life-saving advanced preventative and preparatory responses. In order to achieve this goal, a deep learning model was developed using long short-term memory networks to predict the number of confirmed cases within the next one" +"---\nabstract: 'In this work we develop tools to address combinatorial optimization problems with a cardinality constraint, in which only a subset of variables end up having nonzero values. Firstly, we introduce a new heuristic pruning method that iteratively discards variables through a hybrid quantum-classical optimization step. Secondly, we analyse the use of soft constraints in the form of \u201cchemical potentials\" to control the number of non-zero variables. We illustrate the power of both techniques using the problem of index tracking, which aims to mimic the performance of a financial index with a balanced subset of assets. 
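To make the index-tracking formulation above concrete, a brute-force sketch of cardinality control through a "chemical potential" penalty; the synthetic returns and penalty weight are illustrative, and a variational quantum or classical heuristic solver would replace the exhaustive loop at realistic sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n_assets, n_days = 8, 250
asset_ret = rng.normal(0.0005, 0.01, size=(n_days, n_assets))  # synthetic data
index_ret = asset_ret.mean(axis=1)

mu = 1e-6   # "chemical potential": larger values favor sparser portfolios

best_x, best_cost = None, np.inf
for mask in range(1, 2 ** n_assets):
    x = np.array([(mask >> i) & 1 for i in range(n_assets)], dtype=float)
    w = x / x.sum()                                  # equal weight on selected assets
    tracking_err = np.mean((asset_ret @ w - index_ret) ** 2)
    cost = tracking_err + mu * x.sum()               # soft cardinality constraint
    if cost < best_cost:
        best_x, best_cost = x, cost

print("selected assets:", np.flatnonzero(best_x))
```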
We also compare the performance of different state-of-the-art quantum variational optimization algorithms in our pruning method.'\naddress:\n- 'BBVA Client Solutions Research & Patents, Calle Sauceda 28, 28050 Madrid, Spain.'\n- 'Instituto de F\u00edsica Fundamental, IFF-CSIC, Calle Serrano 113b, 28006 Madrid, Spain.'\n- 'Instituto de F\u00edsica Fundamental, IFF-CSIC, Calle Serrano 113b, 28006 Madrid, Spain.'\nauthor:\n- 'Samuel Fern\u00e1ndez-Lorenzo'\n- Diego Porras\n- 'Juan Jos\u00e9 Garc\u00eda-Ripoll'\nbibliography:\n- 'biblio.bib'\ntitle: 'Hybrid quantum-classical optimization with cardinality constraints and applications to finance'\n---\n\nIntroduction\n============\n\nMany relevant problems in quantitative finance translate into daunting computational tasks, such as combinatorial optimization problems and Monte Carlo simulations\u00a0[@Brandimarte2013]," +"---\nabstract: 'We describe a variety of searches for new physics beyond the Standard Model of particle physics which may be enabled in the coming years by the use of optically levitated masses in high vacuum. Such systems are expected to reach force and acceleration sensitivities approaching (and possibly eventually exceeding) the standard quantum limit over the next decade. For new forces or phenomena that couple to mass, high precision sensing using objects with masses in the fg\u2013ng range have significant discovery potential for new physics. Such applications include tests of fundamental force laws, searches for non-neutrality of matter, high-frequency gravitational wave detectors, dark matter searches, and tests of quantum foundations using massive objects.'\naddress:\n- 'Wright Laboratory, Department of Physics, Yale University, New Haven, CT, USA'\n- 'Center for Fundamental Physics, Department of Physics and Astronomy, Northwestern University, Evanston, IL, USA'\nauthor:\n- 'David C. Moore'\n- 'Andrew A. Geraci'\nbibliography:\n- 'references.bib'\ntitle: Searching for new physics using optically levitated sensors\n---\n\nIntroduction\n============\n\nThe Standard Model (SM) of particle physics is one of the most precisely tested theories developed to date\u00a0[@PDG2018]. For example, the predictions of quantum electrodynamics for the electron magnetic moment agree with experimental" +"---\nabstract: 'Data has exponentially grown in the last years, and knowledge graphs constitute powerful formalisms to integrate a myriad of existing data sources. Transformation functions \u2013 specified with function-based mapping languages like FunUL and RML+FnO \u2013 can be applied to overcome interoperability issues across heterogeneous data sources. However, the absence of engines to efficiently execute these mapping languages hinders their global adoption. We propose FunMap, an interpreter of function-based mapping languages; it relies on a set of lossless rewriting rules to push down and materialize the execution of functions in initial steps of knowledge graph creation. Although applicable to any function-based mapping language that supports joins between mapping rules, FunMap feasibility is shown on RML+FnO. FunMap reduces data redundancy, e.g., duplicates and unused attributes, and converts RML+FnO mappings into a set of equivalent rules executable on RML-compliant engines. We evaluate FunMap performance over real-world testbeds from the biomedical domain. 
The results indicate that FunMap reduces the execution time of RML-compliant engines by up to a factor of 18, thus furnishing a scalable solution for knowledge graph creation.'\nauthor:\n- |\n Samaneh Jozashoori, David Chaves-Fraga, Enrique Iglesias,\\\n Maria-Esther Vidal, Oscar Corcho\nbibliography:\n- 'biblio.bib'\ntitle: 'FunMap: Efficient Execution of Functional Mappings" +"---\nabstract: |\n In commercial inverters, an LCL filter is an integral component used to filter out the switching harmonics and generate a sinusoidal output voltage. The existing literature on averaged virtual oscillator controller (VOC) dynamics considers either current feedback taken before the output LCL filter, which still contains the switching harmonics, or purely inductive filters that ignore the effect of the filter capacitance. In this work, a new version of the averaged VOC dynamics is presented for islanded inverters with current feedback taken after the LCL filter, thus preventing the switching harmonics from entering the VOC. The embedded droop characteristics within the averaged VOC dynamics are identified and a parameter design procedure is presented to regulate the output voltage magnitude and frequency according to the desired ac-performance specifications. Further, a power dispatch technique based on this newer version of the averaged VOC dynamics is presented to simultaneously regulate both the active and reactive output power of two parallel-connected islanded inverters. The control laws are derived and a power security constraint is presented to determine the achievable power set-point. Simulation results for load transients and power dispatch validate the proposed version of the averaged VOC dynamics.\\\nauthor:\n- |\n M. Ali$^*$, H. I. Nurdin and J. E. Fletcher\\" +"---\nauthor:\n- 'Z. Brown'\n- 'G. Mishtaku'\n- 'R. Demina'\n- 'Y. Liu'\n- 'C. Popik'\nbibliography:\n- 'cf\\_bib.bib'\ndate: 'Received XX XX, XXXX; accepted YY YY, YYYY'\ntitle: An Algorithm to locate the centers of Baryon Acoustic Oscillations\n---\n\nIntroduction {#intro}\n============\n\nBaryon acoustic oscillations (BAO) are density waves which formed in the photon-baryon plasma in the primordial universe [@sunyaev1970small; @peebles1973statistical; @eisenstein1998baryonic; @bassett2009baryon]. They are the result of the competition between the gravitational attraction pulling matter (mostly dark matter) into regions of high local density and radiation pressure pushing baryonic matter away from regions of high density. The resulting density waves, propagating at the sound speed of the plasma, produced \u2018bubbles\u2019 with high-density centers, relatively underdense interiors, and overdense spherical \u2018shells.\u2019 At the time of recombination when photons and matter fell out of thermal equilibrium, the bubbles \u2018froze\u2019 into the matter distribution, with the centers remaining enriched in dark matter and shells enriched in baryonic matter [@eisenstein2007robustness; @tansella2018second]. After recombination the BAO centers and shells became the seeds of galaxy formation.\n\nAt late times the BAO can be observed as a preferred length scale in the distribution of galaxies using the two-point correlation function (2pcf) and its Fourier" +"---\nabstract: 'Dynamics of a particle diffusing in a confinement can be seen as a sequence of bulk-diffusion-mediated hops on the confinement surface. Here, we investigate the surface hopping propagator that describes the position of the diffusing particle after a prescribed number of encounters with that surface. 
This quantity plays the central role in diffusion-influenced reactions and determines their most common characteristics such as the propagator, the first-passage time distribution, and the reaction rate. We derive explicit formulas for the surface hopping propagator and related quantities for several Euclidean domains: half-space, circular annuli, circular cylinders, and spherical shells. These results provide the theoretical ground for studying diffusion-mediated surface phenomena. The behavior of the surface hopping propagator is investigated for both \u201cimmortal\u201d and \u201cmortal\u201d particles.'\nauthor:\n- 'Denis\u00a0S.\u00a0Grebenkov'\ntitle: |\n Surface Hopping Propagator:\\\n An Alternative Approach to Diffusion-Influenced Reactions\n---\n\nIntroduction\n============\n\nIn many natural phenomena, particles diffuse in a confinement towards its surface where they can react, permeate, relax their activity or be killed. Examples include heterogeneous catalysis, permeation across cell membranes, filtering in porous media, surface relaxation in nuclear magnetic resonance, and animal foraging [@Rice; @Redner; @Schuss; @Metzler; @Oshanin; @Grebenkov07; @Benichou11; @Bressloff13; @Benichou14]. These phenomena are conventionally described" +"---\nabstract: 'We compute gravitational waves from inspiraling stellar-mass compact objects on the equatorial plane of a massive spinning black hole (BH). Our inspiral orbits are computed by taking into account the adiabatic change of orbital parameters due to gravitational radiation in the lowest order in mass ratio. We employ an interpolation method to compute the adiabatic change at arbitrary points inside the region of orbital parameter space computed in advance. Using the obtained inspiral orbits and associated gravitational waves, we compute power spectra of gravitational waves and the signal-to-noise ratio (SNR) for several values of the BH spin, the masses of the binary, and the initial orbital eccentricity during a hypothetical three-year Laser Interferometer Space Antenna observation before final plunge. We find that (i) the SNR increases as the BH spin and the mass of the compact object increase for the BH mass $M \\agt 10^6M_\\odot$, (ii) the SNR has a maximum for $M \\approx 10^6M_\\odot$, and (iii) the SNR increases as the initial eccentricity increases for $M=10^6M_\\odot$. We also show that incorporating the contribution from the higher multipole modes of gravitational waves is crucial for enhancing the detection rate.'\nauthor:\n- Ryuichi Fujita\n- Masaru Shibata\ntitle: Extreme" +"---\nabstract: 'The proliferation of speech technologies and rising privacy legislation calls for the development of privacy preservation solutions for speech applications. These are essential since speech signals convey a wealth of rich, personal and potentially sensitive information. Anonymisation, the focus of the recent VoicePrivacy initiative, is one strategy to protect speaker identity information. Pseudonymisation solutions aim not only to mask the speaker identity and preserve the linguistic content, quality and naturalness, as is the goal of anonymisation, but also to preserve voice distinctiveness. Existing metrics for the assessment of anonymisation are ill-suited and those for the assessment of pseudonymisation are completely lacking. Based upon voice similarity matrices, this paper proposes the first intuitive visualisation of pseudonymisation performance for speech signals and two novel metrics for objective assessment. 
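A minimal numerical sketch of the voice similarity matrices described above, with random vectors standing in for real speaker embeddings; the paper's two metrics are more refined than the simple diagonal/off-diagonal comparison shown here:

```python
import numpy as np

rng = np.random.default_rng(3)
n_spk, dim = 20, 192  # embedding size chosen arbitrarily

orig = rng.normal(size=(n_spk, dim))    # embeddings from original speech
pseudo = rng.normal(size=(n_spk, dim))  # embeddings after pseudonymisation

def cosine_matrix(A, B):
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

M = cosine_matrix(orig, pseudo)         # voice similarity matrix

# De-identification pushes the diagonal (same speaker before/after) toward
# the off-diagonal level; distinctiveness asks pseudonymised voices to
# remain mutually separable.
diag = np.diag(M).mean()
off = M[~np.eye(n_spk, dtype=bool)].mean()
print(f"diagonal {diag:.3f} vs off-diagonal {off:.3f}")
```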
They reflect the two key pseudonymisation requirements of de-identification and voice distinctiveness.'\naddress: |\n $^1$Laboratoire Informatique d\u2019Avignon (LIA), Avignon Universit\u00e9, France\\\n $^2$Digital Security Department, EURECOM, France\nbibliography:\n- 'mybib.bib'\ntitle: Speech Pseudonymisation Assessment Using Voice Similarity Matrices\n---\n\n**Index Terms**: pseudonymisation, anonymisation, privacy preservation, VoicePrivacy\n\nIntroduction\n============\n\nThe ubiquity and proliferation of speech technologies and the increase in data protection regulation such as the European General Data Protection Regulation (GDPR)\u00a0[@EU-GDPR-2016]" +"---\nabstract: 'Three-dimensional phase contrast imaging of multiply-scattering samples in X-ray and electron microscopy is extremely challenging, due to small numerical apertures, the unavailability of wavefront shaping optics, and the highly nonlinear inversion required from intensity-only measurements. In this work, we present a new algorithm using the scattering matrix formalism to solve the scattering from a non-crystalline medium from scanning diffraction measurements, and recover the illumination aberrations. Our method will enable 3D imaging and materials characterization at high resolution for a wide range of materials.'\nauthor:\n- Philipp M Pelz\n- Hamish G Brown\n- Jim Ciston\n- Scott D Findlay\n- Yaqian Zhang\n- Mary Scott\n- Colin Ophus\nbibliography:\n- 'apssamp.bib'\ntitle: Reconstructing the Scattering Matrix from Scanning Electron Diffraction Measurements Alone\n---\n\n\\[sec:introduction\\] Introduction\n=================================\n\nPhase contrast imaging is widely used in light [@pluta1988advanced; @clarke2002microscopy], x-ray [@kirz1995soft; @mayo2012line], and electron microscopy [@spence1999future; @glaeser2013invited], due to its high efficiency and resolution. By using coherent radiation to illuminate a sample, we can resolve very small changes in a sample\u2019s local index of refraction through the interference of the illumination wave fronts that the accumulated phase shifts produce [@zernike1935phase]. However, because we can only directly measure the probability density of" +"---\nabstract: 'We provide two independent systematic methods of performing $D$-dimensional physical-state sums in gauge theory and gravity in such a way that spurious light-cone singularities are not introduced. A natural application is to generalized unitarity in the context of dimensional regularization or theories in higher spacetime dimensions. Other applications include squaring matrix elements to obtain cross sections, and decompositions in terms of gauge-invariant tensors.'\nauthor:\n- Dimitrios Kosmopoulos\ntitle: 'Simplifying $D$-Dimensional Physical-State Sums in Gauge Theory and Gravity'\n---\n\nIntroduction\n============\n\nThe past years have seen remarkable advances in our ability to calculate scattering amplitudes in perturbative quantum field theory. On the one hand, much of this progress relies on choices of variables that exploit the four-dimensional nature of the kinematics, such as spinor-helicity\u00a0[@SpinorHelicity] or momentum-twistor\u00a0[@Hodges:2009hk] variables. On the other hand, for certain problems it is favorable to work in arbitrary dimension $D$. 
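For context on the light-cone subtlety just mentioned: the standard form of a $D$-dimensional physical polarization sum for a gauge boson of momentum $q$ is written with an auxiliary null reference vector $n$,

$$\sum_{\lambda\,\text{phys}} \varepsilon^{(\lambda)}_{\mu}(q)\, \varepsilon^{(\lambda)*}_{\nu}(q) \;=\; -\eta_{\mu\nu} + \frac{q_{\mu} n_{\nu} + n_{\mu} q_{\nu}}{q \cdot n}\,,$$

and the $1/(q\cdot n)$ factors are precisely the spurious light-cone singularities that the systematic methods of this record are designed to avoid introducing.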
For example, $D$-dimensional methods proved useful in the recent evaluation of the conservative two-body Hamiltonian for spinless black holes to order $G^3$\u00a0[@3PM], relevant to gravitational-wave physics studied by the LIGO and Virgo collaborations\u00a0[@gravWaveDiscovery].\n\nIn multiloop calculations, the preferred regularization scheme is dimensional regularization\u00a0[@Collins]. Occasionally, subtleties arise when one combines four-dimensional" +"---\nabstract: 'Wireless charging coupled with computation offloading in edge networks offers a promising solution for realizing power-hungry and computation intensive applications on user devices. We consider a mutil-access edge computing (MEC) system with collocated MEC servers and base-stations/access points (BS/AP) supporting multiple users requesting data computation and wireless charging. We propose an integrated solution with computation offloading to satisfy the largest proportion of requested wireless charging while keeping the energy consumption at the minimum subject to the MEC-AP transmit power and latency constraints. We propose a novel algorithm to perform data partitioning, time allocation, transmit power control and design the optimal energy beamforming for wireless charging. Our resource allocation scheme offers an energy minimizing solution compared to other schemes while also delivering higher amount of transferred charge to the users.'\nauthor:\n- |\n Rafia Malik and Mai Vu\\\n Department of Electrical and Computer Engineering, Tufts University, MA, USA\\\n Email: rafia.malik@tufts.edu, mai.vu@tufts.edu\nbibliography:\n- './wptbib.bib'\ntitle: ' Energy-efficient Wireless Charging and Computation Offloading In MEC Systems\\'\n---\n\nEdge computing, MEC, wireless power transfer, energy efficient network, optimization\n\nIntroduction\n============\n\nIn recent years, there has been a significant rise in the number of connected devices, coupled with a rampant growth of" +"---\nabstract: 'We classify the spherical birational sheets in a complex simple simply-connected algebraic group. We use the classification to show that, when $G$ is a connected reductive complex algebraic group with simply-connected derived subgroup, two conjugacy classes ${\\mathcal{O}}_1$, ${\\mathcal{O}}_2$ of $G$ lie in the same birational sheet, up to a shift by a central element of $G$, if and only if the coordinate rings of ${\\mathcal{O}}_1$ and ${\\mathcal{O}}_2$ are isomorphic as $G$-modules. As a consequence, we prove a conjecture of Losev for the spherical subvariety of the Lie algebra of $G$.'\nauthor:\n- |\n Filippo Ambrosio, Mauro Costantini\\\n Dipartimento di Matematica \u201cTullio Levi-Civita\u201d\\\n Torre Archimede - via Trieste 63 - 35121 Padova - Italy\\\n ambrosio@math.unipd.it, costantini@math.unipd.it\nbibliography:\n- 'biblio.bib'\ntitle: Spherical birational sheets in reductive groups\n---\n\nMSC-class: 20G20 (Primary) 14M27 (Secondary)\n\nKeywords: birational sheets, spherical conjugacy classes\n\nIntroduction\n============\n\nLet $G$ be a complex connected reductive algebraic group acting on a variety $X$. A sheet of $X$ is an irreducible component of the locally closed subset $\\{x \\in X \\mid \\dim (G \\cdot x) = d\\}$ for some fixed $d$: then $X$ is the finite union of its sheets. Let $B$ be a Borel subgroup of $G$, the" +"---\nabstract: 'In this paper we are focusing on functional inequalities on compact simple edge spaces. 
More precisely we address the question whether the classical functional inequalities (Sobolev, Poincar\u00e9) hold in this setting, and as a by-product of our methods we obtain an optimality result concerning the $B-$constant of the Sobolev inequality.'\naddress: 'Athens, Greece'\nauthor:\n- Dimitrios Oikonomopoulos\nbibliography:\n- 'preprintBib.bib'\nnocite: '[@*]'\ntitle: Functional Inequalities on Simple Edge Spaces\n---\n\nIntroduction\n============\n\nStratified spaces constitute an important part of singular spaces. Informally speaking, a stratified space is a topological space that can be partitioned into smooth manifolds (strata) of different dimension. Although this statement is far from complete, it is a guiding principle behind the idea of stratified spaces. The study of these spaces was initiated by Whitney [@Whitney], Thom [@stratifiedThom] and Mather [@stratifiedMather] among others. Later, Goresky, MacPherson and Cheeger studied the intersection homology and $L^2$-cohomology of these spaces ([@goreskyhomology1] and [@goreskyphersoncheeger]). It was Cheeger with his seminal paper [@Cheeger] that initiated the study of these spaces from an analytical point of view, and more precisely the properties of the Laplace operator on manifolds with conical singularities. The program of laying the analytic foundations of these spaces" +"---\nabstract: 'This chapter reviews the instrumental variable quantile regression model of [@iqr:ema]. We discuss the key conditions used for identification of structural quantile effects within this model which include the availability of instruments and a restriction on the ranks of structural disturbances. We outline several approaches to obtaining point estimates and performing statistical inference for model parameters. Finally, we point to possible directions for future research.'\nauthor:\n- 'Victor Chernozhukov[^1]Christian Hansen[^2]Kaspar W\u00fcthrich[^3]'\nbibliography:\n- 'IVQRchapterBib.bib'\ntitle: Instrumental Variable Quantile Regression\n---\n\n\\[theorem\\][Acknowledgement]{} \\[theorem\\][Algorithm]{} \\[theorem\\][Axiom]{} \\[theorem\\][Case]{} \\[theorem\\][Claim]{} \\[theorem\\][Conclusion]{} \\[theorem\\][Condition]{} \\[theorem\\][Conjecture]{} \\[theorem\\][Corollary]{} \\[theorem\\][Criterion]{} \\[theorem\\][Definition]{} \\[theorem\\][Example]{} \\[theorem\\][Exercise]{} \\[theorem\\][Lemma]{} \\[theorem\\][Notation]{} \\[theorem\\][Problem]{} \\[theorem\\][Proposition]{} \\[theorem\\][Remark]{} \\[theorem\\][Solution]{} \\[theorem\\][Summary]{}\n\n*Keywords:* instrumental variables, ranks, $C(\\alpha)$-statistic, treatment effects, causal effects\n\n*JEL classification:* C21, C26\n\nIntroduction {#Sec: Introduction}\n============\n\nEmpirical analyses often focus on understanding the structural (causal) relationship between an outcome, $Y$, and variables of interest, $D$. In many cases, interest is not just on how $D$ affects measures of the center of the distribution $Y$ but also on other features of the distribution. For example, in understanding the effect of a government subsidized saving program, one might be more interested in the effect of the program on the lower tail of the savings distribution conditional on individual characteristics" +"---\nabstract: 'We analyse a possible adjustment of Twin Higgs models allowing to have broken electroweak (EW) symmetry at all temperatures below the sigma-model scale $\\sim 1$TeV. The modification consists of increasing the Yukawa couplings of the twins of light SM fermions. 
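Returning to the instrumental variable quantile regression record above: the inverse quantile regression idea (profile over the structural coefficient and keep the value that drives the instrument's coefficient to zero) can be sketched in a few lines, assuming statsmodels' `QuantReg` and a simulated endogenous treatment:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
z = rng.binomial(1, 0.5, n)                         # instrument
eps = rng.normal(size=n)
d = (0.8 * z + 0.5 * eps + rng.normal(size=n) > 0.5).astype(float)  # endogenous
y = 1.0 + 2.0 * d + eps                             # true structural effect = 2

tau, X = 0.5, sm.add_constant(z)
grid = np.linspace(0.0, 4.0, 81)
# For each candidate alpha, regress y - alpha*d on (1, z) at quantile tau
# and record the magnitude of the coefficient on the instrument.
zcoef = [abs(sm.QuantReg(y - a * d, X).fit(q=tau).params[1]) for a in grid]
print("IVQR estimate at the median:", grid[int(np.argmin(zcoef))])
```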
The naturalness considerations then imply a presence of relatively light electroweak-charged fermions, which can be produced at the LHC, and decay into SM gauge and Higgs bosons and missing energy. Analysis of experimental bounds shows that such a modified model features an increased amount of fine-tuning compared to the original Twin Higgs models, but still less tuning than the usual pseudo-Nambu-Goldstone Higgs models not improved by $Z_2$ twin symmetry. The obtained modification in the evolution of the EW symmetry breaking strength can, in particular, have interesting implications for models of EW baryogenesis, which we comment on.'\nauthor:\n- Oleksii Matsedonskyi\nbibliography:\n- 'biblio.bib'\ntitle: 'High-Temperature Electroweak Symmetry Breaking by SM Twins'\n---\n\nIntroduction\n============\n\nThe origin of the observed matter-antimatter asymmetry remains an open question of fundamental physics. Among various proposed scenarios to address this question, the electroweak baryogenesis\u00a0[@Shaposhnikov:1987tw; @Cohen:1990it] (EWBG) stands out as the one unavoidably requiring sub-TeV-scale new physics beyond standard model (SM). This new" +"---\nabstract: 'In this work we study the Cauchy problem in Gevrey spaces for a generalized class of equations that contains the case $b=0$ of the $b$-equation. For the generalized equation, we prove that it is locally well-posed for initial data in Gevrey spaces. Moreover, as we move to global well-posedness, we show that for a particular choice of the parameter in the equation the local solution is global analytic in both time and spatial variables.'\naddress:\n- 'Department of Mathematical Sciences, School of Science, Loughborough University, Loughborough, UK '\n- 'Centre of Mathematics, Computation and Cognition, Universidade Federal do ABC, Brazil '\nauthor:\n- \ntitle: 'Local well-posedness and global analyticity for solutions of a generalized $0$-equation'\n---\n\nIntroduction\n============\n\nThe 4-parameter equation $$\\begin{aligned}\n\\label{4par}\n u_t - u_{txx} + au^ku_x - bu^{k-1}u_xu_{xx} - cu^ku_{xxx}=0,\\quad a,b,c\\in {\\mathbb{R}}\\setminus\\{0\\},\\quad k\\in\\mathbb{N},\\end{aligned}$$ studied in [@Eu; @Eu2; @HH], is a generalization of the Camassa-Holm equation [@CH] $$\\begin{aligned}\n\\label{CH}\n u_t - u_{txx} + 3uu_x - 2u_xu_{xx} -uu_{xxx}=0,\\end{aligned}$$ and the Novikov equation [@HW; @Novikov] $$\\begin{aligned}\n\\label{Nov}\n u_t - u_{txx} + 4u^2u_x - 3uu_xu_{xx} - u^2u_{xxx}=0,\\end{aligned}$$ that admits certain scaling transformations as symmetries. The equation has proven to be an interesting mathematical equation once it is possible to choose" +"---\nabstract: 'Manipulation of magnetic ground states by effective control of competing magnetic interactions has led to the finding of many exotic magnetic states. In this direction, the tetragonal Heusler compounds consisting of multiple magnetic sublattices and crystal symmetry favoring chiral Dzyaloshinskii-Moriya interaction (DMI) provide an ideal base to realize non-trivial magnetic structures. Here, we present the observation of a large robust topological Hall effect (THE) in the multi-sublattice Mn$_{2-x}$PtIn Heusler magnets. The topological Hall resistivity, which originates from the non-vanishing real space Berry curvature in the presence of non-zero scalar spin chirality, systematically decreases with decreasing the magnitude of the canting angle of the magnetic moments at different sublattices. 
With the help of first-principles calculations and magnetic and neutron diffraction measurements, we establish that the presence of a tunable non-coplanar magnetic structure arising from the competing Heisenberg exchanges and chiral DMI from the D$_{2d}$ symmetry structure is responsible for the observed THE. The robustness of the THE with respect to the degree of non-collinearity adds a new degree of freedom for designing THE based spintronic devices.'\nauthor:\n- Bimalesh Giri\n- Arif Iqbal Mallick\n- Charanpreet Singh\n- 'P. V. Prakash Madduri'\n- Fran\u00e7oise Damay\n- Aftab Alam\n-" +"---\nabstract: 'We present a unified framework based on primal-dual stochastic mirror descent for approximately solving infinite-horizon Markov decision processes (MDPs) given a generative model. When applied to an average-reward MDP with ${\\mathrm{A_{tot}}}$ total state-action pairs and mixing time bound ${t_{\\mathrm{mix}}}$ our method computes an $\\epsilon$-optimal policy with an expected ${\\widetilde{O}({t_{\\mathrm{mix}}}^2{\\mathrm{A_{tot}}}{\\epsilon}^{-2})}$ samples from the state-transition matrix, removing the ergodicity dependence of prior art. When applied to a $\\gamma$-discounted MDP with ${\\mathrm{A_{tot}}}$ total state-action pairs our method computes an $\\epsilon$-optimal policy with an expected ${\\widetilde{O}((1-\\gamma)^{-4}{\\mathrm{A_{tot}}}{\\epsilon}^{-2})}$ samples, matching the previous state-of-the-art up to a $(1-\\gamma)^{-1}$ factor. Both methods are model-free, update state values and policies simultaneously, and run in time linear in the number of samples taken. We achieve these results through a more general stochastic mirror descent framework for solving bilinear saddle-point problems with simplex and box domains and we demonstrate the flexibility of this framework by providing further applications to constrained MDPs.'\nbibliography:\n- 'references.bib'\ntitle: Efficiently Solving MDPs with Stochastic Mirror Descent\n---\n\nIntroduction\n============\n\nMarkov decision processes (MDPs) are a fundamental mathematical abstraction for sequential decision making under uncertainty and they serve as a basic modeling tool in reinforcement learning (RL) and stochastic control\u00a0[@bertsekas1995neuro; @puterman2014markov; @sutton2018reinforcement]." +"---\nabstract: 'This paper describes the participation of the LIMSI\\_UPV team in SemEval-2020 Task 9: Sentiment Analysis for Code-Mixed Social Media Text. The proposed approach competed in the SentiMix Hindi-English subtask, which addresses the problem of predicting the sentiment of a given Hindi-English code-mixed tweet. We propose a Recurrent Convolutional Neural Network that combines both the recurrent neural network and the convolutional network to better capture the semantics of the text for code-mixed sentiment analysis. 
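As a side note to the stochastic mirror descent record above: on a simplex domain, the mirror descent step with the entropy mirror map reduces to a multiplicative-weights update. A self-contained toy sketch on a linear objective (the actual method is primal-dual and uses MDP-specific stochastic gradient estimators):

```python
import numpy as np

def mirror_descent_simplex(grad_fn, n, steps=500, eta=0.1):
    """Entropy-mirror-map mirror descent over the probability simplex."""
    x = np.full(n, 1.0 / n)
    avg = np.zeros(n)
    for _ in range(steps):
        g = grad_fn(x)            # (possibly stochastic) gradient estimate
        x = x * np.exp(-eta * g)  # exponentiated-gradient step
        x /= x.sum()              # Bregman projection back onto the simplex
        avg += x
    return avg / steps            # averaged iterate, standard for saddle points

rng = np.random.default_rng(5)
c = rng.normal(size=6)
x_star = mirror_descent_simplex(lambda x: c, 6)
print(np.argmax(x_star) == np.argmin(c))  # mass concentrates on argmin <c, x>
```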
The proposed system obtained 0.69 (best run) in terms of F1 score on the given test data and achieved the 9th place (Codalab username: somban) in the SentiMix Hindi-English subtask.'\nauthor:\n- |\n Somnath Banerjee$^1$, Sahar Ghannay$^1$, Sophie Rosset$^1$, Anne Vilnat$^1$, Paolo Rosso$^2$\\\n $^1$ LIMSI, CNRS, Universit\u00e9 Paris-Saclay, Orsay, France\\\n $^2$ Universitat Polit\u00e8cnica de Val\u00e8ncia, Spain\\\n $^1$[firstname.lastname@limsi.fr]{}\\\n $^2$[prosso@dsic.upv.es]{}\nbibliography:\n- 'semeval2020.bib'\ntitle: 'LIMSI\\_UPV at SemEval-2020 Task 9: Recurrent Convolutional Neural Network for Code-mixed Sentiment Analysis'\n---\n\nIntroduction {#intro}\n============\n\nIn this digital era, users express their personal thoughts and opinions regarding a wide range of topics on social media platforms such as blogs, micro-blogs (e.g., Twitter), and chats (e.g., WhatsApp and Facebook messages). Multilingual societies like India with a decent amount of internet penetration widely adopted such social" +"---\nabstract: |\n Benefits of static type systems are well-known: they offer guarantees that no type error will occur during runtime and, inherently, inferred types serve as documentation on how functions are called. On the other hand, many type systems have to limit expressiveness of the language because, in general, it is undecidable whether a given program is correct regarding types. Another concern that was not addressed so far is that, for logic programming languages such as Prolog, it is impossible to distinguish between intended and unintended failure and, worse, intended and unintended success without additional annotations.\n\n In this paper, we elaborate on and discuss the aforementioned issues. As an alternative, we present a static type analysis which is based on [*plspec*]{}. Instead of ensuring full type-safety, we aim to statically identify type errors on a best-effort basis without limiting the expressiveness of Prolog programs. Finally, we evaluate our approach on real-world code featured in the SWI community packages and a large project implementing a model checker.\nauthor:\n- 'Isabel Wingen, Philipp K\u00f6rner $^\\textrm{\\Letter}$ [ ]{}'\nbibliography:\n- 'paper.bib'\ntitle: 'Effectiveness of Annotation-Based Static Type Inference'\n---\n\nProlog, static verification, optional type system, data specification\n\nIntroduction {#sec:motivation}\n============\n\nDynamic type" +"---\nabstract: 'The present article investigates the role of heavy nuclear clusters and weakly bound light nuclear clusters based on a newly developed equation of state for core collapse supernova studies. A novel approach is brought forward for the description of nuclear clusters, taking into account the quasiparticle approach and continuum correlations. It demonstrates that the commonly employed nuclear statistical equilibrium approach, based on non-interacting particles, for the description of light and heavy clusters becomes invalid for warm nuclear matter near the saturation density. This has important consequences for studies of core collapse supernovae. To this end, we implement this nuclear equation of state provided for arbitrary temperature, baryon density and isospin asymmetry, to spherically symmetric core collapse supernova simulations in order to study the impact on the dynamics as well as on the neutrino emission. For the inclusion of a set of weak processes involving light clusters the rate expressions are derived, including medium modifications at the mean field level. 
No substantial impact on the post-bounce dynamics or the neutrino emission could be found from the inclusion of this variety of weak reactions involving light clusters.'\nauthor:\n- Tobias Fischer\n- Stefan Typel\n- 'Gerd\u00a0R[\u00f6]{}pke'" +"---\nabstract: 'We investigate how the NIHAO galaxies match the observed star formation main sequence (SFMS) and what the origin of its scatter is. The NIHAO galaxies reproduce the SFMS and generally agree with observations, but the slope is about unity and thus significantly larger than observed values. This is because observed galaxies at large stellar masses, although still being part of the SFMS, are already influenced by quenching. This partial suppression of star formation by AGN feedback leads to lower star formation rates and therefore to lower observed slopes. We confirm that including the effects of AGN in our galaxies leads to slopes in agreement with observations. We find the deviation of a galaxy from the SFMS is correlated with its $z=0$ dark matter halo concentration and thus with its halo formation time. This means galaxies with a higher-than-average star formation rate (SFR) form later and vice versa. We explain this apparent correlation with the SFR by re-interpreting galaxies that lie above the SFMS (higher-than-average SFR) as lying to the left of the SFMS (lower-than-average stellar mass) and vice versa. Thus later forming haloes have a lower-than-average stellar mass; this is simply because they have had less-than-average time to" +"---\nabstract: 'For a learning task, a Gaussian process (GP) is concerned with learning the statistical relationship between inputs and outputs, since it offers not only the prediction mean but also the associated variability. The vanilla GP however struggles to learn complicated distributions with properties such as heteroscedastic noise, multi-modality and non-stationarity from massive data, due to the Gaussian marginal and the cubic complexity. To this end, this article studies new scalable GP paradigms including the non-stationary heteroscedastic GP, the mixture of GPs and the latent GP, which introduce additional latent variables to modulate the outputs or inputs in order to learn richer, non-Gaussian statistical representations. We further resort to different variational inference strategies to arrive at analytical or tighter evidence lower bounds (ELBOs) of the marginal likelihood for efficient and effective model training. 
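For reference alongside the scalable GP record above, the vanilla GP regression baseline that those modulated paradigms extend fits in a dozen lines; this sketch fixes the RBF kernel length-scale and noise level by hand (no hyperparameter learning), which is exactly the Gaussian-marginal, cubic-cost model whose limitations motivate the modulated variants:

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf(a, b, ls=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

x = rng.uniform(-3, 3, 40)
y = np.sin(x) + 0.1 * rng.normal(size=40)
xs = np.linspace(-3, 3, 200)

K = rbf(x, x) + 0.1**2 * np.eye(40)          # kernel matrix plus noise
L = np.linalg.cholesky(K)                    # O(n^3): the cubic bottleneck
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

mean = rbf(xs, x) @ alpha                    # predictive mean
v = np.linalg.solve(L, rbf(x, xs))
var = rbf(xs, xs).diagonal() - (v**2).sum(axis=0)  # predictive variance
```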
Extensive numerical experiments against state-of-the-art GP and neural network (NN) counterparts on various tasks verify the superiority of these scalable modulated GPs, especially the scalable latent GP, for learning diverse data distributions.'\naddress:\n- 'School of Energy and Power Engineering, Dalian University of Technology, China, 116024'\n- 'School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798'" +"---\nabstract: 'We give an elementary and self-contained proof of the uniformization theorem for non-compact simply-connected Riemann surfaces.'\naddress: |\n Institute of Mathematics of the Romanian Academy\\\n Calea Grivi\u0163ei 21\\\n 010702 Bucharest\\\n Romania\nauthor:\n- Cipriana Anghel\n- Rare\u015f Stan\ntitle: Uniformization of Riemann surfaces revisited\n---\n\nIntroduction\n============\n\nPaul Koebe and shortly thereafter Henri Poincar\u00e9 are credited with having proved in 1907 the famous *uniformization theorem* for Riemann surfaces, arguably the single most important result in the whole theory of analytic functions of one complex variable. This theorem generated connections between different areas and led to the development of new fields of mathematics. After Koebe, many proofs of the uniformization theorem were proposed, all of them relying on a large body of topological and analytical prerequisites. Modern authors [@fkra], [@forster] use sheaf cohomology, the Runge approximation theorem, elliptic regularity for the Laplacian, and rather strong results about the vanishing of the first cohomology group of noncompact surfaces. A more recent proof with analytic flavour appears in Donaldson [@don], again relying on many strong results, including the Riemann-Roch theorem, the topological classification of compact surfaces, Dolbeault cohomology and the Hodge decomposition. In fact, one can hardly find in the literature" +"---\nabstract: 'The current study uses a network analysis approach to explore the STEM pathways that students take through their final year of high school in Aotearoa New Zealand. By accessing individual-level microdata from New Zealand\u2019s Integrated Data Infrastructure, we are able to create a co-enrolment network comprised of all STEM assessment standards taken by students in New Zealand between 2010 and 2016. We explore the structure of this co-enrolment network through the use of community detection and a novel measure of entropy. We then investigate how network structure differs across sub-populations based on students\u2019 sex, ethnicity, and the socio-economic status (SES) of the high school they attended. Results show the structure of the STEM co-enrolment network differs across these sub-populations, and also changes over time. We find that, while female students were more likely to have been enrolled in life science standards, they were less well represented in physics, calculus, and vocational (e.g., agriculture, practical technology) standards. Our results also show that the enrolment patterns of the M\u0101ori and Pacific Islands sub-population had higher levels of entropy, an observation that may be explained by fewer enrolments in key science and mathematics standards. 
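To sketch the co-enrolment construction described above: a toy student-by-standard matrix, its projection to a standard co-enrolment network, and a simple entropy of how a student's enrolments spread across subject groupings (an illustrative stand-in for the paper's entropy measure, not its definition):

```python
import numpy as np

rng = np.random.default_rng(7)

E = rng.binomial(1, 0.3, size=(100, 12))  # 100 students x 12 standards
co_enrol = E.T @ E                        # co-enrolment counts between standards
np.fill_diagonal(co_enrol, 0)

groups = np.repeat(np.arange(4), 3)       # 12 standards in 4 subject groups

def enrolment_entropy(row):
    counts = np.bincount(groups[row.astype(bool)], minlength=4)
    if counts.sum() == 0:
        return 0.0
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

H = np.array([enrolment_entropy(r) for r in E])
print("mean enrolment entropy:", H.mean())
```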
Through further investigation of this disparity, we find that" +"---\nabstract: 'Due to the automatic feature extraction procedure via multi-layer nonlinear transformations, the deep learning-based visual trackers have recently achieved great success in challenging scenarios for visual tracking purposes. Although many of those trackers utilize the feature maps from pre-trained *convolutional neural networks* (CNNs), the effects of selecting different models and exploiting various combinations of their feature maps are still not thoroughly compared. To the best of our knowledge, all those methods use a fixed number of convolutional feature maps without considering the scene attributes (e.g., occlusion, deformation, and fast motion) that might occur during tracking. As a prerequisite, this paper proposes adaptive *discriminative correlation filters* (DCF) based on the methods that can exploit CNN models with different topologies. First, the paper provides a comprehensive analysis of four commonly used CNN models to determine the best feature maps of each model. Second, with the aid of analysis results as attribute dictionaries, an adaptive exploitation of deep features is proposed to improve the accuracy and robustness of visual trackers regarding video characteristics. Third, the generalization of the proposed method is validated on various tracking datasets as well as CNN models with similar architectures. Finally, extensive experimental results demonstrate the effectiveness" +"---\nabstract: 'Strategic argumentation provides a simple model of disputation and negotiation among agents. Although agents might be expected to act in our best interests, there is little that enforces such behaviour. (Maher, 2016) introduced a model of corruption and resistance to corruption within strategic argumentation. In this paper we identify corrupt behaviours that are not detected in that formulation. We strengthen the model to detect such behaviours, and show that, under the strengthened model, all the strategic aims in (Maher, 2016) are resistant to corruption.'\nauthor:\n- |\n Michael J. Maher\\\n Reasoning Research Institute,\\\n Canberra, Australia\\\n E-mail: michael.maher@reasoning.org.au\nbibliography:\n- 'audit\\_stripped.bib'\ndate: '26 April, 2017'\ntitle: Corruption and Audit in Strategic Argumentation\n---\n\nIntroduction\n============\n\nStrategic argumentation is an incomplete-knowledge game in which competing players take turns in adding arguments to a common pool of arguments such that at the end of a player\u2019s turn that player\u2019s strategic aim is (usually temporarily) achieved. A player loses when she cannot successfully complete her turn. Each player knows only her own arguments and the arguments in the common pool.\n\nThis gives a simple but insightful model of disputation and negotiation. It is particularly suited as the basis for legal disputation between" +"---\nabstract: 'Name disambiguation aims to identify unique authors with the same name. Existing name disambiguation methods always exploit author attributes to enhance disambiguation results. However, some discriminative author attributes (e.g., email and affiliation) may change because of graduation or job-hopping, which will result in the separation of the same author\u2019s papers in digital libraries. 
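The name disambiguation abstract continuing below merges paper blocks whenever a pairwise model judges two blocks to belong to the same author. A minimal union-find sketch of that merging step, with `same_author` standing in for the trained pairwise classifier (a hypothetical callback, not the authors' code):

```python
def merge_blocks(blocks, same_author, threshold=0.5):
    # Greedy agglomeration: any pair of blocks scored above `threshold`
    # by the pairwise model ends up in the same author cluster.
    parent = list(range(len(blocks)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(blocks)):
        for j in range(i + 1, len(blocks)):
            if same_author(blocks[i], blocks[j]) > threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(blocks)):
        clusters.setdefault(find(i), []).append(blocks[i])
    return list(clusters.values())
```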
Although these attributes may change, an author\u2019s co-authors and research topics do not change frequently with time, which means that papers within a period have similar text and relation information in the academic network. Inspired by this idea, we introduce Multi-view Attention-based Pairwise Recurrent Neural Network ([MA-PairRNN]{}) to solve the name disambiguation problem. We divided papers into small blocks based on discriminative author attributes and blocks of the same author will be merged according to pairwise classification results of [MA-PairRNN]{}. [MA-PairRNN]{}combines heterogeneous graph embedding learning and pairwise similarity learning into a framework. In addition to attribute and structure information, [MA-PairRNN]{}also exploits semantic information by meta-path and generates node representation in an inductive way, which is scalable to large graphs. Furthermore, a semantic-level attention mechanism is adopted to fuse multiple meta-path based representations. A Pseudo-Siamese network consisting of two RNNs takes two paper sequences in publication" +"---\nabstract: 'Gamma-ray continuum at $> 10 $ MeV photon energy yields information on $\\gtrsim 0.2 - 0.3$ GeV/nucleon ions at the Sun. We use the general-purpose Monte Carlo code *FLUktuierende KAskade* (FLUKA) to model the transport of ions injected into thick and thin target sources, the nuclear processes that give rise to pions and other secondaries and the escape of the resulting photons from the atmosphere. We give examples of photon spectra calculated with a range of different assumptions about the primary ion velocity distribution and the source region. We show that FLUKA gives results for pion decay photon emissivity in agreement with previous treatments. Through the directionality of secondary products, as well as Compton scattering and pair production of photons prior to escaping the Sun, the predicted spectrum depends significantly on the viewing angle. Details of the photon spectrum in the $\\approx 100$ MeV range may constrain the angular distribution of primary ions and the depths at which they interact. We display a set of thick-target spectra produced making various assumptions about the incident ion energy and angular distribution and the viewing angle. If ions are very strongly beamed downward, or ion energies do not extend much above" +"---\nauthor:\n- 'P.\u00a0Kurf\u00fcrst'\n- 'O.\u00a0Pejcha'\n- 'J.\u00a0Krti\u010dka'\nbibliography:\n- 'bibliography.bib'\ndate: Received\ntitle: 'Supernova explosions interacting with aspherical circumstellar material: Implications for light curves, spectral line profiles, and polarization'\n---\n\nIntroduction {#intro}\n============\n\nWhen an expanding supernova (SN) blast wave collides with a dense pre-existing circumstellar material (CSM), the gas in the collision region is compressed and becomes radiative. Depending on the CSM properties, a substantial fraction of the SN kinetic energy might be converted into radiation. Such SN\u2013CSM interactions can give rise to transients that are more luminous than ordinary SNe, including a subset of recently-recognized superluminous SNe [e.g., @2012Sci...337..927G; @smith17_handbook]. We show light curves of a few examples of interacting SNe in Fig.\u00a0\\[Nyholm\\]. 
Since the most radiatively efficient collisions occur with CSM located near the progenitor, the interacting SNe reveal the mass-loss history of massive stars shortly before the collapse of the core [e.g., @smith07; @smith14; @stritzinger12].\n\nThe observed properties of the SN\u2013CSM interaction often require an aspherical CSM distribution. The evidence comes from multicomponent line profiles in SN spectra [e.g., @chugai94; @fransson02; @smith15; @andrews17; @andrews18], (spectro)polarimetry [e.g., @leonard00; @wang08; @chornock10; @patat11], or combinations thereof [@bilinski18; @bilinski20]. Aspherical CSM can lead to observable" +"---\nabstract: 'We study the effect of an explicit interaction between two scalar field components describing dark matter in the context of a recently proposed framework for interaction. We find that, even assuming a very small coupling, it is sufficient to explain the observational effects of a cosmological constant, and also overcome the problems of the $\\Lambda$CDM model without assuming an exotic dark energy.'\nauthor:\n- 'V\u00edctor H. C\u00e1rdenas$^1$'\n- Samuel Lepe$^2$\ntitle: Interacting dark matter and cosmic acceleration\n---\n\nIntroduction\n============\n\nIn the context of the standard model of cosmology, the simplest way we can describe the observations that type Ia supernovae are dimmer than expected [@sn1a] is by introducing \u2013 by hand \u2013 a cosmological constant, leading to the claimed accelerated expansion and establishing the so far successful Lambda Cold Dark Matter (LCDM) model. Although this model agreed with almost every observational test, from a theoretical point of view the model cannot be taken seriously. First of all, assuming that this model is valid requires us to accept that we live right in a very special time in the history of the universe, something like (again) positioning in the center of the universe (this time including" +"---\nabstract: 'Monument classification can be performed on the basis of appearance and shape, from coarse to fine categories. There is much semantic information present in the monuments, reflected in the eras in which they were built, their type or purpose, the dynasties which established them, etc. In particular, the Indian subcontinent exhibits a great deal of variation in terms of architectural styles owing to its rich cultural heritage. In this paper, we propose a framework that utilizes hierarchy to preserve semantic information while performing image classification or image retrieval. We encode the learnt deep embeddings to construct a dictionary of images and then utilize a re-ranking framework on the retrieved results using DeLF features. The semantic information preserved in these embeddings helps to classify unknown monuments at a higher level of granularity in the hierarchy. We have curated a large, novel Indian heritage monuments dataset comprising images of historical, cultural and religious importance with subtypes of eras, dynasties and architectural styles. We demonstrate the performance of the proposed framework in image classification and retrieval tasks and compare it with other competing methods on this dataset.'\nauthor:\n- '{ronakgupta143@gmail.com, prerana@jnu.ac.in, brejesh@ee.iitd.ac.in, varshul.cw@gmail.com }'\nbibliography:\n- 'refs.bib'\ntitle: Semantics Preserving Hierarchy" +"---\nabstract: 'Radiation-induced photocurrent in semiconductor devices can be simulated using complex physics-based models, which are accurate but computationally expensive. 
This presents a challenge for implementing device characteristics in high-level circuit simulations where it is computationally infeasible to evaluate detailed models for multiple individual circuit elements. In this work we demonstrate a procedure for learning compact photocurrent models that are efficient enough to implement in large-scale circuit simulations, but remain faithful to the underlying physics. Our approach utilizes Dynamic Mode Decomposition (DMD), a system identification technique for learning reduced order discrete-time dynamical systems from time series data based on singular value decomposition. To obtain physics-aware device models, we simulate the excess carrier density induced by radiation pulses by solving the Ambipolar Diffusion Equation, then use the simulated internal state as training data for the DMD algorithm. Our results show that the significantly reduced order photocurrent models obtained via this method accurately approximate the dynamics of the internal excess carrier density, which can be used to calculate the induced current at the device boundaries while remaining compact enough to incorporate into larger simulations.'\nauthor:\n- 'Joshua Hanson[^1]'\n- 'Pavel Bochev[^2]'\n- 'Biliana Paskaleva[^3]'\nbibliography:\n- 'JoshuaHanson.bib'\ntitle: 'Learning Compact Physics-Aware Photocurrent Models Using" +"---\nabstract: 'The $DDK$ 3-body system is supposed to be bound due to the strongly attractive interaction between the $D$ meson and the $K$ meson in the isospin zero channel. The minimum quark content of this 3-body bound state is $cc\\bar{q}\\bar{s}$ with $q=u,d$. It will be an explicitly exotic tetraquark state once discovered. In order to confirm the phenomenological study of the $DDK$ system, we can refer to lattice QCD as a powerful theoretical tool parallel to experimental measurement. In this paper, a 3-body quantization condition scheme is derived via the non-relativistic effective theory and the particle-dimer picture in finite volume. The lattice spectrum of this 3-body system is calculated with the existing model inputs. The spectrum shows various interesting properties of the $DDK$ system, and it may reveal the nature of the $D_{s0}^{*}(2317)$. This predicted spectrum is expected to be tested in future lattice simulations.'\naddress:\n- 'College of Science, University of Shanghai for Science and Technology, Shanghai 200093, China'\n- 'School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China'\nauthor:\n- 'Jin-Yi Pang'\n- 'Jia-Jun Wu[^1]'\n- 'Li-Sheng Geng[^2]'\ntitle: $DDK$ system in finite volume\n---\n\nIntroduction\n============\n\nThe discovery of the $D_{s0}^{*}(2317)$ [@Aubert:2003fg;" +"---\nabstract: 'A dual-pass differential Fabry\u2013Perot interferometer (DPDFPI) is one of the candidate interferometer configurations for future Fabry\u2013Perot type space gravitational wave antennas, such as the Deci-hertz Interferometer Gravitational Wave Observatory. In this paper, the working principle of the DPDFPI has been investigated, and the necessity of adjusting the absolute length of the cavity for the operation of the DPDFPI has been found. In addition, using the 55-cm-long prototype, the operation of the DPDFPI has been demonstrated for the first time and it has been confirmed that the adjustment of the absolute arm length reduces the cavity detuning as expected. 
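The photocurrent abstract above builds its reduced-order models with Dynamic Mode Decomposition. Below is a minimal exact-DMD sketch following the standard SVD-based construction; it is the textbook algorithm, not the authors' implementation, and the truncation rank `r` is a free choice.

```python
import numpy as np

def dmd(X, r):
    # X: (n, m) snapshot matrix whose columns are successive states.
    # Returns eigenvalues/modes of the best-fit linear map X2 ~= A @ X1.
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]           # rank-r truncation
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s   # reduced r x r operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T / s @ W             # exact DMD modes
    return eigvals, modes
```

Applied to simulated excess-carrier-density snapshots, the eigenvalues give the discrete-time dynamics and the modes the spatial structures of the compact model.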
This work provides the proof of concept of the DPDFPI for application to the future Fabry\u2013Perot type space gravitational wave antennas.'\naddress:\n- '$^1$ KAGRA Observatory, Institute for Cosmic Ray Research, The University of Tokyo, Kashiwa, Chiba 277-8582, Japan'\n- '$^2$ Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, Sagamihara, Kanagawa 252-5210, Japan'\n- '$^3$ Department of Physics, Graduate School of Science, The University of Tokyo, Bunkyo, Tokyo 113-0033, Japan'\n- '$^4$ KAGRA Observatory, Institute for Cosmic Ray Research, The University of Tokyo, Hida, Gifu 506-1205, Japan'\n- '$^5$ Research Center for the Early Universe, The University of Tokyo," +"---\nabstract: |\n In recent years, industrial robots have been installed in various industries to handle advanced manufacturing and high precision tasks. However, further integration of industrial robots is hampered by their limited flexibility, adaptability and decision making skills compared to human operators. Assembly tasks are especially challenging for robots since they are contact-rich and sensitive to even small uncertainties. While reinforcement learning (RL) offers a promising framework to learn contact-rich control policies from scratch, its applicability to high-dimensional continuous state-action spaces remains rather limited due to high brittleness and sample complexity. To address those issues, we propose different pruning methods that facilitate convergence and generalization. In particular, we divide the task into free and contact-rich sub-tasks, perform the control in Cartesian rather than joint space, and parameterize the control policy. Those pruning methods are naturally implemented within the framework of dynamic movement primitives (DMP). To handle contact-rich tasks, we extend the DMP framework by introducing a coupling term that acts like the human wrist and provides active compliance under contact with the environment. We demonstrate that the proposed method can learn insertion skills that are invariant to space, size, shape, and closely related scenarios, while handling large uncertainties. Finally" +"---\nabstract: 'Quantum mechanical perturbative expressions for second order dynamical magnetoelectric (ME) susceptibilities have been derived and calculated for a small molecular system using the Hubbard Hamiltonian with SU(2) symmetry breaking in the form of spin-orbit coupling (SOC) or spin-phonon coupling. These susceptibilities will have signatures in second harmonic generation spectra. We show that SU(2) symmetry breaking is the key to generate these susceptibilities. We have calculated these ME coefficients by solving the Hamiltonian for low lying excited states using Lanczos method. Varying the Hubbard term along with SOC strength, we find spin and charge and both spin-charge dominated spectra of dynamical ME coefficients. We have shown that intensities of the peaks in the spectra are highest when the magnitudes of Hubbard term and SOC coupling term are in similar range.'\nauthor:\n- Abhiroop Lahiri\n- 'Swapan K. Pati'\nbibliography:\n- 'me.bib'\ntitle: 'Signatures of nonlinear magnetoelectricity in second harmonic spectra of SU(2) symmetry broken quantum many-body systems'\n---\n\nINTRODUCTION\n============\n\nThe study of Magnetoelectric(ME) effect in materials has gained a huge interest due to their potential applications in sensors [@1], ME RAM [@2; @3; @4] and other spintronic devices. 
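The robot-assembly abstract above parameterizes its control policies with dynamic movement primitives. A minimal discrete DMP rollout in the canonical Ijspeert-style formulation is sketched below; the contact-compliance coupling term described in the paper is omitted and all gains are illustrative defaults.

```python
import numpy as np

def dmp_rollout(y0, g, n_steps, tau=1.0, alpha_z=25.0, beta_z=6.25,
                alpha_x=8.0, forcing=lambda x: 0.0):
    # Stable spring-damper toward goal g, modulated by a learned forcing
    # term f(x); with f = 0 this reduces to a plain point attractor.
    dt = 1.0 / n_steps
    y, z, x = y0, 0.0, 1.0
    traj = []
    for _ in range(n_steps):
        zdot = (alpha_z * (beta_z * (g - y) - z) + forcing(x)) / tau
        ydot = z / tau
        xdot = -alpha_x * x / tau   # canonical phase variable
        z += zdot * dt
        y += ydot * dt
        x += xdot * dt
        traj.append(y)
    return np.array(traj)

print(dmp_rollout(0.0, 1.0, 200)[-1])  # converges near the goal g = 1.0
```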
The ME effect is observed in materials where there is" +"---\nabstract: 'A Bayesian network is a widely used probabilistic graphical model with applications in knowledge discovery and prediction. Learning a Bayesian network (BN) from data can be cast as an optimization problem using the well-known score-and-search approach. However, selecting a single model (i.e., the best scoring BN) can be misleading or may not achieve the best possible accuracy. An alternative to committing to a single model is to perform some form of Bayesian or frequentist model averaging, where the space of possible BNs is sampled or enumerated in some fashion. Unfortunately, existing approaches for model averaging either severely restrict the structure of the Bayesian network or have only been shown to scale to networks with fewer than 30 random variables. In this paper, we propose a novel approach to model averaging inspired by performance guarantees in approximation algorithms. Our approach has two primary advantages. First, our approach only considers *credible* models in that they are optimal or near-optimal in score. Second, our approach is more efficient and scales to significantly larger Bayesian networks than existing approaches.'\nauthor:\n- |\n Zhenyu A. Liao z6liao@uwaterloo.ca\\\n Charupriya Sharma c9sharma@uwaterloo.ca\\\n David R. Cheriton School of Computer Science\\\n University of Waterloo\\\n Waterloo, ON N2L" +"---\nabstract: 'Automatically detecting personality traits can aid several applications, such as mental health recognition and human resource management. Most datasets introduced for personality detection so far have analyzed these traits for each individual in isolation. However, personality is intimately linked to our social behavior. Furthermore, surprisingly little research has focused on personality analysis using low resource languages. To this end, we present a novel peer-to-peer Hindi conversation dataset, *Vyaktitv*[^1]. It consists of high-quality audio and video recordings of the participants, with Hinglish[^2] textual transcriptions for each conversation. The dataset also contains a rich set of socio-demographic features, like income, cultural orientation, amongst several others, for all the participants. We release the dataset for public use, as well as perform preliminary statistical analysis along the different dimensions. Finally, we also discuss various other applications and tasks for which the dataset can be employed.'\nauthor:\n- \nbibliography:\n- 'bibliography.bib'\ntitle: '*Vyaktitv*: A Multimodal Peer-to-Peer Hindi Conversations based Dataset for Personality Assessment'\n---\n\nMultimedia; Dataset; Human Computer Interaction;\n\nIntroduction\n============\n\nAccording to psychologists, the study of the personality involves addressing some of the most exciting queries on human psychology, including analysis of how different aspects of people\u2019s lives are related, and how" +"---\nbibliography:\n- 'references.bib'\n---\n\n**Trends in bird abundance differ among protected forests but not bird guilds**\\\nJeffrey W. Doser^1,\\ 2^, Aaron S. Weed^3^, Elise F. Zipkin^2,\\ 4^, Kathryn M. Miller^5^,\\\nAndrew O. 
Finley^1,\\ 2,\\ 6^\n\n^1^Department of Forestry, Michigan State University, East Lansing, MI, 48824, USA\\\n^2^Ecology, Evolution, and Behavior Program, Michigan State University, East Lansing, MI, 48824, USA\\\n^3^Northeast Temperate Inventory and Monitoring Network, National Park Service, Woodstock, VT 05091, USA\\\n^4^Department of Integrative Biology, Michigan State University, East Lansing, MI, 48824, USA\\\n^5^Northeast Temperate Inventory and Monitoring Network, National Park Service, Bar Harbor, ME, 04609, USA\\\n^6^Department of Geography, Environment, and Spatial Sciences, Michigan State University, East Lansing, MI, 48824, USA\\\n**Corresponding Author**: Jeffrey W. Doser, telephone: (585) 683-4170; email: doserjef@msu.edu; ORCID ID: 0000-0002-8950-9895\\\n**Running Head**: Trends in forest bird abundance\n\nAbstract {#abstract .unnumbered}\n========\n\nImproved monitoring and associated inferential tools to efficiently identify declining bird populations, particularly of rare or sparsely distributed species, are key to informed conservation and management across large spatio-temporal regions. We assess abundance trends for 106 bird species in a network of eight forested national parks located within the northeast U.S.A. from 2006-2019 using a novel hierarchical model. We develop a multi-species," +"---\nabstract: 'Predicting the properties of a molecule from its structure is a challenging task. Recently, deep learning methods have improved the state of the art for this task because of their ability to learn useful features from the given data. By treating molecular structures as graphs, where atoms and bonds are modeled as nodes and edges, graph neural networks (GNNs) have been widely used to predict molecular properties. However, the design and development of GNNs for a given dataset rely on labor-intensive design and tuning of the network architectures. Neural architecture search (NAS) is a promising approach to discover high-performing neural network architectures automatically. To that end, we develop an NAS approach to automate the design and development of GNNs for molecular property prediction. Specifically, we focus on automated development of message-passing neural networks (MPNNs) to predict the molecular properties of small molecules in quantum mechanics and physical chemistry datasets from the MoleculeNet benchmark. We demonstrate the superiority of the automatically discovered MPNNs by comparing them with manually designed GNNs from the MoleculeNet benchmark. We study the relative importance of the choices in the MPNN search space, demonstrating that customizing the architecture is critical to enhancing performance in molecular" +"---\nabstract: 'Image features for retrieval-based localization must be invariant to dynamic objects (e.g. cars) as well as seasonal and daytime changes. Such invariances are, up to some extent, learnable with existing methods using triplet-like losses, given a large number of diverse training images. However, due to the high algorithmic training complexity, there exists insufficient comparison between different loss functions on large datasets. In this paper, we train and evaluate several localization methods on three different benchmark datasets, including Oxford RobotCar with over one million images. This large-scale evaluation yields valuable insights into the generalizability and performance of retrieval-based localization. 
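The localization abstract above compares triplet-like losses at scale. For concreteness, here is a generic batch-hard triplet loss sketch, a common member of that family and only a baseline relative to the feature volume-based loss the paper introduces next:

```python
import numpy as np

def batch_hard_triplet_loss(emb, labels, margin=0.3):
    # For each anchor, use the farthest positive and the nearest negative
    # in the batch (Euclidean distances), then apply a hinge with margin.
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    losses = []
    for i in range(len(emb)):
        pos = same[i].copy()
        pos[i] = False
        neg = ~same[i]
        if pos.any() and neg.any():
            losses.append(max(0.0, d[i][pos].max() - d[i][neg].min() + margin))
    return float(np.mean(losses))
```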
Based on our findings, we develop a novel method for learning more accurate and better generalizing localization features. It consists of two main contributions: (i) a feature volume-based loss function, and (ii) hard positive and pairwise negative mining. On the challenging Oxford RobotCar night condition, our method outperforms the well-known triplet loss by 24.4% in localization accuracy within 5m.'\nauthor:\n- 'Janine Thoma$^{1}$'\n- 'Danda Pani Paudel$^{1}$'\n- 'Ajad Chhatkuli$^{1}$'\n- 'Luc Van Gool$^{1,2}$'\nbibliography:\n- 'biblio.bib'\ndate: 'Uploaded: 9 July 2020'\ntitle: 'Learning Condition Invariant Features for Retrieval-Based Localization from 1M Images'\n---\n\nIntroduction\n============\n\nVision-based localization has" +"---\nauthor:\n- 'G.\u00a0Andr\u00e9\u00a0Oliva'\n- Rolf\u00a0Kuiper\nbibliography:\n- 'fragbib.bib'\n- 'PapersRolf.bib'\ntitle: Modeling disk fragmentation and multiplicity in massive star formation\n---\n\nIntroduction\n============\n\nDuring the formation of massive stars ($\\gtrsim 8{\\,\\mathrm{M_\\odot}}$), radiation pressure becomes important against the gravity of the collapsing molecular cloud. The formation of an accretion disk with polar outflows provides a mechanism for circumventing the radiation pressure barrier [see, e.g., @2010ApJ...722.1556K] and allow the forming star to become massive. This disk is expected to fragment and produce companion stars.\n\nThere is growing observational evidence that supports this scenario. Observations of disks around massive (proto-)stars are reported by, for example, [@2015ApJ...813L..19J], [@2016MNRAS.462.4386I], , [@2018ApJ...860..119G] and . Some of these disks have also been shown to be Keplerian-like.\n\nMoreover, there is evidence that, early in evolution, these disks gain enough mass to become self-gravitating, form spiral arms and fragment. [@2018ApJ...869L..24I] observed a fragmented Keplerian disk around the proto-O star G11.92-0.61 MM1a, with a fragment MM1b in the outskirts of the disk, at $\\sim 2000{\\,\\mathrm{au}}$ from the primary. reported a smaller disk-like structure around the central object in the G351.77-0.54 high-mass hot core, and a fragment at about $\\sim 1000{\\,\\mathrm{au}}$. have also observed spiral arms and" +"---\nabstract: 'This article introduces the Wanca 2017 corpus of texts crawled from the internet from which the sentences in rare Uralic languages for the use of the Uralic Language Identification (ULI) 2020 shared task were collected. We describe the ULI dataset and how it was constructed using the Wanca 2017 corpus and texts in different languages from the Leipzig corpora collection. 
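The ULI abstract continuing below reports baseline language-identification experiments, but the baseline method itself is not described in this excerpt. As an assumed stand-in, a simple character n-gram profile classifier of the kind commonly used for language identification:

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    text = f" {text.strip()} "
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def train_profiles(corpus):
    # corpus: {language: [sentences]}; profiles are relative n-gram frequencies.
    profiles = {}
    for lang, sents in corpus.items():
        counts = Counter(g for s in sents for g in char_ngrams(s))
        total = sum(counts.values())
        profiles[lang] = {g: c / total for g, c in counts.items()}
    return profiles

def identify(text, profiles, floor=1e-9):
    # Pick the language maximizing the summed log-probability of the n-grams.
    grams = char_ngrams(text)
    return max(profiles, key=lambda lang:
               sum(math.log(profiles[lang].get(g, floor)) for g in grams))
```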
We also provide baseline language identification experiments conducted using the ULI 2020 dataset.'\nauthor:\n- |\n Tommi Jauhiainen\\\n Department of Digital Humanities\\\n University of Helsinki\\\n [tommi.jauhiainen@helsinki.fi ]{}\\\n Heidi Jauhiainen\\\n Department of Digital Humanities\\\n University of Helsinki\\\n [ heidi.jauhiainen@helsinki.fi]{}\\\n Niko Partanen\\\n Department of Finnish, Finno-Ugrian\\\n and Scandinavian Studies\\\n University of Helsinki\\\n [niko.partanen@helsinki.fi]{}\\\n Krister Lind\u00e9n\\\n Department of Digital Humanities\\\n University of Helsinki\\\n [krister.linden@helsinki.fi]{}\\\nbibliography:\n- 'coling2020.bib'\ntitle: 'Uralic Language Identification (ULI) 2020 shared task dataset and the Wanca 2017 corpus'\n---\n\nIntroduction {#intro}\n============\n\nAs part of the Finno-Ugric and the Internet project (SUKI), we have collected textual material for some of the more endangered Uralic languages from the internet [@jauhiainen5]. In this paper, we introduce the Wanca 2017 corpus which will be published in the Language Bank of Finland[^1] as a downloadable package as well as through the Korp[^2]" +"---\nabstract: 'On the basis of an initial interest in symmetric cryptography, in the present work we study a chain of subgroups. Starting from a Sylow $2$-subgroup of $\\operatorname{AGL}(2,n)$, each term of the chain is defined as the normalizer of the previous one in the symmetric group on $2^n$ letters. Partial results and computational experiments lead us to conjecture that, for large values of $n$, the index of a normalizer in the consecutive one does not depend on $n$. Indeed, there is a strong evidence that the sequence of the logarithms of such indices is the one of the partial sums of the numbers of partitions into at least two distinct parts.'\naddress: |\n DISIM\\\n Universit\u00e0 degli Studi dell\u2019Aquila\\\n via Vetoio\\\n I-67100 Coppito (AQ)\\\n Italy\nauthor:\n- Riccardo Aragona\n- Roberto Civino\n- Norberto Gavioli\n- Carlo Maria Scoppola\nbibliography:\n- 'sym2n\\_ref.bib'\ntitle: 'A Chain of Normalizers in the Sylow $2$-subgroups of the symmetric group on $2^n$ letters'\n---\n\n[^1]\n\nIntroduction\n============\n\nLet $n$ be a non-negative integer and let $\\operatorname{Sym}(2^n)$ denote the symmetric group on $2^n$ letters. The study of the conjugacy class in $\\operatorname{Sym}(2^n)$ of the elementary abelian regular $2$-subgroups has recently drawn attention for its application to" +"---\nabstract: |\n In [@VH], Agol proved the Virtual Haken and Virtual Fibering Conjectures by confirming a conjecture of Wise: Every cubulated hyperbolic group is virtually special. We extend this result to cocompactly cubulated relatively hyperbolic groups with minimal assumptions on the parabolic subgroups. Our proof proceeds by first recubulating to obtain an improper action with controlled stabilizers (a *weakly relatively geometric* action), and then Dehn filling to obtain many cubulated hyperbolic quotients. We apply our results to prove the Relative Cannon Conjecture for certain cubulated or partially cubulated relatively hyperbolic groups.\n\n One of our main results (Theorem\u00a0\\[t:RH Agol\\]) recovers via different methods a theorem of Oreg\u00f3n-Reyes [@OregonReyes].\naddress:\n- 'Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, 322 Science and Engineering Offices (M/C 249), 851 S. 
Morgan St., Chicago, IL 60607-7045'\n- 'Department of Mathematics, 310 Malott Hall, Cornell University, Ithaca, NY 14853'\nauthor:\n- Daniel Groves\n- Jason Fox Manning\ntitle: Specializing cubulated relatively hyperbolic groups\n---\n\n[^1]\n\nIntroduction\n============\n\nIf $G$ is a group with a proper, cellular cocompact action on a CAT(0) cube complex $X$, we say $G$ is *cubulated by $X$*. (Some authors drop the cocompactness assumption.) Actions on cube" +"---\nabstract: 'We review some parametric families of 2-qubit states for which concurrence and maximal singlet fraction (MSF) have different and even opposite behaviour. For states considered in this work, maximal achievable fidelity (MAF), a quantity derived by Verstraete [*et al.*]{} in [@Verstraete-MAF], shows a better agreement with concurrence and complies with important features required to be considered a good entanglement measure.'\nauthor:\n- 'Hermann L. Albrecht Q.'\n- 'Douglas F. Mundarain'\nbibliography:\n- 'biblio.bib'\ntitle: 'Maximal singlet fraction vs. maximal achievable fidelity as proper entanglement measures'\n---\n\nIntroduction\n============\n\nEntanglement is at the very heart of quantum mechanics and is not \u201cone but rather *the* characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought\u201d [@schrodinger_1935], [@schrodinger_1936], [@Horodecki-Ent]. Many applications of quantum communication and quantum computation use entanglement as their primordial resource for performing certain tasks that classical resources cannot do as well [@Bennett_Teleport], [@BB84], [@Ekert_Crypto]. Although it has been shown that non-entangled states may display nonlocality and can be used in quantum information theory [@qDiscord-Olliver], [@qDiscord-Henderson], [@Modi-qDiscord], [@qNolocal_Ent], [@ExpQC_NoEnt], entanglement remains of the utmost importance in the field. The study of entanglement and its measures remains a highly active" +"---\nabstract: |\n Renewable sources are taking center stage in electricity generation. However, matching supply with demand in a renewable-rich system is a difficult task due to the intermittent nature of renewable resources (wind, solar, etc.). As a result, Demand Response (DR) programs are an essential part of the modern grid. An efficient DR technique is to devise different pricing schemes that encourage customers to reduce or shift the electric load.\n\n In this paper, we consider a market model for DR using Block Rate Pricing (BRP) of two blocks. We use a utility maximization approach in a competitive market. We show that when customers are price-taking and the utility cost function is quadratic, the resulting system achieves an equilibrium. Moreover, the equilibrium is unique and efficient, which maximizes social welfare. A distributed algorithm is proposed to find the optimal pricing of both blocks and the load. Both the customers and the utility run the market. The proposed scheme encourages customers to curtail or shift their load. Numerical results are presented to validate our technique.\nauthor:\n- Haris Mansoor\n- Naveed Arshad\nbibliography:\n- 'reference.bib'\ntitle: Market Model for Demand Response under Block Rate Pricing\n---\n\nIntroduction\n============\n\nMany countries" +"---\nabstract: 'In future wireless networks, we anticipate that a large number of devices will connect to mobile networks through moving relays installed on vehicles, in particular in public transport vehicles. 
To provide high-speed moving relays with accurate channel state information, different methods have been proposed, among which the predictor antenna (PA) is one of the most promising. Here, the PA system refers to a setup where two sets of antennas are deployed on top of a vehicle, and the front antenna(s) can be used to predict the channel state information for the antenna(s) behind. In this paper, we study the delay-limited performance of PA systems using adaptive rate allocations. We use the fundamental results on the achievable rate of finite block-length codes to study the system throughput and error probability in the presence of short packets. Particularly, we derive closed-form expressions for the error probability, the average transmit rate as well as the optimal rate allocation, and study the effect of different parameters on the performance of PA systems. The results indicate that rate adaptation under finite block-length codewords can improve the performance of the PA system with spatial mismatch.'\nauthor:\n- \n- \n- \nbibliography:\n- 'main.bib'\ntitle: |\n Predictor" +"---\nabstract: 'In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks. We formulate a new loss function that is equipped with an additional entropic regularization. Our loss function considers the contribution of adversarial samples that are drawn from a specially designed distribution in the data space that assigns high probability to points with high loss and in the immediate neighborhood of training samples. Our proposed algorithms optimize this loss to seek adversarially robust valleys of the loss landscape. Our approach achieves competitive (or better) performance in terms of robust classification accuracy as compared to several state-of-the-art robust learning approaches on benchmark datasets such as MNIST and CIFAR-10.'\nauthor:\n- |\n Gauri Jagatap, Ameya Joshi, Animesh Basak Chowdhury, Siddharth Garg, and Chinmay Hegde[^1]\\\n New York University\\\n `{gauri.jagatap,ameya.joshi,abc586,sg175,chinmay.h}@nyu.edu`\nbibliography:\n- 'biblio.bib'\ntitle: |\n Adversarially Robust Learning via\\\n Entropic Regularization\n---\n\nIntroduction {#sec:intro}\n============\n\nDeep neural networks have led to significant breakthroughs in the fields of computer vision [@krizhevsky2012imagenet], natural language processing [@zhang2020adversarial], speech processing [@carlini2016hidden], recommendation systems [@tang2019adversarial] and forensic imaging [@rota2016bad]. However, deep networks have also been shown to be very susceptible to carefully designed \u201cattacks\u201d \u00a0[@goodfellow2014explaining; @papernot2016transferability; @biggio2018wild]. In particular, the" +"---\nabstract: |\n In this paper, we mainly analyze the long-time asymptotics of the high-order soliton for the Hirota equation. Two different Riemann-Hilbert representations of the Darboux matrix with the high-order soliton are given to establish the relationships between the inverse scattering method and the Darboux transformation. The asymptotic analysis with a single spectral parameter is derived directly through determinant formulas. 
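The predictor-antenna abstract above builds on achievable-rate results for finite block-length codes. A standard starting point for such closed-form analyses is the normal approximation of Polyanskiy et al.; whether the paper uses exactly this form is an assumption on our part.

```latex
% Normal approximation to the maximal rate at blocklength n and error
% probability \epsilon for SNR \gamma (Polyanskiy-Poor-Verdu, 2010);
% Q^{-1} is the inverse Gaussian Q-function.
R(n,\epsilon) \approx \log_2(1+\gamma)
  - \sqrt{\frac{V(\gamma)}{n}}\, Q^{-1}(\epsilon),
\qquad
V(\gamma) = \left(1 - \frac{1}{(1+\gamma)^{2}}\right)(\log_2 e)^{2}.
```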
Furthermore, the long-time asymptotics with $k$ spectral parameters is given by combining the iterated Darboux matrix and the result of high-order soliton with single spectral parameter, which discloses the structure of high-order soliton clearly and is possible to be utilized in the optic experiments.\n\n [**Keywords:**]{}Hirota equation, Asymptotic analysis, High-order soliton\naddress:\n- 'School of Mathematics, South China University of Technology, Guangzhou, China, 510641'\n- 'School of Mathematics, South China University of Technology, Guangzhou, China, 510641'\nauthor:\n- Xiaoen Zhang\n- Liming Ling\ntitle: 'Asymptotic analysis of high-order soliton for the Hirota equation'\n---\n\nIntroduction\n============\n\nWe are concerned with the following Hirota equation, $$\\label{eq:Hequation}\n{\\mathrm{i}}q_t+\\gamma\\left(q_{xx}+2|q|^2q\\right)+{\\mathrm{i}}\\delta\\left(q_{xxx}+6|q|^2q_x\\right)=0,$$ which was first derived by Hirota [@hirota-JMP-1973]. It can be considered as the modified nonlinear Schr\u00f6dinger (NLS) equation with high-order dispersion and time-delay corrections to the cubic nonlinearity. For $\\gamma=1$ and $\\delta=0$, Eq. can be reduced to the" +"---\nabstract: 'We consider a version of the Gross-Neveu model in 1+1 dimensions with discrete chiral and continuous flavor symmetry (isospin). In 2+1 dimensions, this model is known as chiral Heisenberg Gross-Neveu model. Spontaneous symmetry breaking and the emergence of two massless and one massive scalar bosons are shown. A duality to the Nambu\u2013Jona-Lasinio model with isospin is exhibited, provided that the isovector pseudoscalar mean field is constrained to a plane in isospin space. This enables us to find the phase diagram as a function of temperature, chemical potential and isospin chemical potential as well as twisted kinks. A bare mass term acts quite differently when added to this model as compared to other chiral variants of the Gross-Neveu model.'\nauthor:\n- 'Michael Thies[^1]'\ntitle: 'Duality study of chiral Heisenberg Gross-Neveu model in 1+1 dimensions'\n---\n\nIntroduction {#sect1}\n============\n\nFour-fermion models in 1+1 dimensions can teach us a lot about strongly interacting relativistic systems. Well-known examples are the Gross-Neveu (GN) model [@1] with Z$_2\\times$Z$_2$ chiral symmetry ($\\psi \\to \\pm\\gamma_5 \\psi$), $${\\cal L}_{\\rm GN} = \\bar{\\psi} i \\partial \\!\\!\\!/ \\psi + \\frac{g^2}{2} \\left(\\bar{\\psi}\\psi \\right)^2\n\\label{I1}$$ and the Nambu\u2013Jona-Lasinio (NJL) model [@2] with U(1)$\\times$U(1) chiral symmetry ($\\psi \\to \\exp\\{i (\\alpha + \\beta" +"---\nabstract: 'The appearance of half-quantized thermal Hall conductivity in $\\alpha$-RuCl$_3$ in the presence of in-plane magnetic fields has been taken as a strong evidence for Kitaev spin liquid. Apart from the quantization, the observed sign structure of the thermal Hall conductivity is also consistent with predictions from the exact solution of the Kitaev model. Namely, the thermal Hall conductivity changes sign when the field direction is reversed with respect to the heat current, which is perpendicular to one of the three nearest neighbor bonds on the honeycomb lattice. On the other hand, it is almost zero when the field is applied along the bond direction. Here, we show that such a peculiar sign structure of the thermal Hall conductivity is a generic property of the polarized state in the presence of in-plane magnetic-fields. 
In this case, thermal Hall effect arises from topological magnons with finite Chern numbers and the sign structure follows from the symmetries of the momentum space Berry curvature. Using a realistic spin model with bond-dependent interactions, we show that the thermal Hall conductivity can have a magnitude comparable to that observed in the experiments. Hence the sign structure alone cannot make a strong case for Kitaev" +"---\nabstract: 'The space-based gravitational-wave observatory LISA relies on a form of synthetic interferometry (time-delay interferometry, or TDI) where the otherwise overwhelming laser phase noise is canceled by linear combinations of appropriately delayed phase measurements. These observables grow in length and complexity as the realistic features of the LISA orbits are taken into account. In this paper we outline an *implicit* formulation of TDI where we write the LISA likelihood directly in terms of the basic phase measurements, and we marginalize over the laser phase noises in the limit of infinite laser-noise variance. Equivalently, we rely on TDI observables that are *defined numerically* (rather than algebraically) from a discrete-filter representation of the laser propagation delays. Our method generalizes to any time dependence of the armlengths; it simplifies the modeling of gravitational-wave signals; and it allows a straightforward treatment of data gaps and missing measurements.'\nauthor:\n- Michele Vallisneri\n- 'Jean-Baptiste Bayle'\n- Stanislav Babak\n- Antoine Petiteau\nbibliography:\n- 'references.bib'\ntitle: 'TDI-$\\infty$: time-delay interferometry without delays'\n---\n\nIntroduction\n============\n\nInterferometry is not indispensable to the experiments that seek to detect gravitational waves (GWs) by monitoring the displacement of freely falling test masses. Sensitivity is set by disturbances to free fall" +"---\nabstract: |\n We consider the linear water-wave problem in a periodic channel $\\Pi^h \\subset \n {{\\mathbb R}}^3$, which is shallow except for a periodic array of deep potholes in it. Motivated by applications to surface wave propagation phenomena, we study the band-gap structure of the essential spectrum in the linear water-wave system, which includes the spectral Steklov boundary condition posed on the free water surface. We apply methods of asymptotic analysis, where the most involved step is the construction and analysis of an appropriate boundary layer in a neighborhood of the joint of the potholes with the thin part of the channel. Consequently, the existence of a spectral gap for small enough $h$ is proven.\naddress:\n- 'St. Petersburg State University, Universitetskaya nab. 7\u20139, St. Petersburg, 199034, Russia, and Institute for Problems in Mechanical Engineering of RAS, St. Petersburg, 199178, Russia '\n- 'Department of Mathematics, University of Helsinki, 00014 Helsinki, Finland'\nauthor:\n- 'Sergei A. Nazarov'\n- Jari Taskinen\ntitle: 'Band-gap structure of the spectrum of the water-wave problem in a shallow canal with a periodic family of deep pools'\n---\n\n[^1]\n\nIntroduction. {#sec1}\n=============\n\nFormulation of the water-wave problem. {#sec1.1}\n--------------------------------------\n\nLet $x = (y,z) \\in {{\\mathbb R}}^2" +"---\nabstract: 'We study the capabilities of Lyman\u00a0$\\beta$ and the O\u00a0[i]{} 1027 and 1028\u00a0\u00c5\u00a0spectral lines for understanding the properties of the chromosphere and transition region. 
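For the magnon thermal Hall discussion above, the linear-response expression commonly used to connect band Berry curvature to the transverse thermal conductivity is the Matsumoto-Murakami formula; it is quoted here in its standard form from the literature, not verbatim from the paper.

```latex
% \Omega_{n\mathbf{k}}: Berry curvature of magnon band n;
% \rho_{n\mathbf{k}}: Bose occupation; Li_2: dilogarithm.
\kappa^{xy} = -\frac{k_B^2 T}{\hbar V} \sum_{n,\mathbf{k}}
  c_2\!\left(\rho_{n\mathbf{k}}\right)\,\Omega_{n\mathbf{k}},
\qquad
c_2(\rho) = (1+\rho)\left(\ln\frac{1+\rho}{\rho}\right)^{2}
  - (\ln\rho)^{2} - 2\,\mathrm{Li}_2(-\rho).
```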
The oxygen transitions are located in the wing of Lyman\u00a0$\\beta$, which is a candidate spectral line for the solar missions Solar Orbiter/SPICE and Solar-C (EUVST). We examine general spectroscopic properties of the three transitions in the quiet Sun by synthesizing them assuming non-local thermal equilibrium and taking into account partial redistribution effects. We estimate the heights where the spectral lines are sensitive to the physical parameters by computing the response functions to temperature and velocity using a 1D semi-empirical atmospheric model. We also synthesize the intensity spectrum using the 3D enhanced network simulation computed with the [Bifrost]{} code. The results indicate that Lyman\u00a0$\\beta$ is sensitive to the temperature from the middle chromosphere to the transition region while it is mainly sensitive to the line-of-sight velocity at the latter atmospheric layers, around 2000\u00a0km above the optical surface. The O\u00a0[i]{} lines form lower in the middle chromosphere, being sensitive to the LOS velocities at lower heights than those covered by Lyman\u00a0$\\beta$. The spatial distribution of intensity signals computed with the" +"---\nabstract: 'We present a numerical analysis supporting the evidence that the redshift evolution of the drifting coefficient of the field cluster mass function is capable of breaking several cosmic degeneracies. This evidence is based on the data from the [CoDECS]{} and [DUSTGRAIN]{}-[*pathfinder*]{} simulations performed separately for various non-standard cosmologies including coupled dark energy, $f(R)$ gravity and combinations of $f(R)$ gravity with massive neutrinos as well as for the standard $\\Lambda$CDM cosmology. We first numerically determine the field cluster mass functions at various redshifts in the range of $0\\le z\\le 1$ for each cosmology. Then, we compare the analytic formula developed in previous works with the numerically obtained field cluster mass functions by adjusting its drifting coefficient, $\\beta$, at each redshift. It is found that the analytic formula with the best-fit coefficient provides a good match to the numerical results at all redshifts for all of the cosmologies. The empirically determined redshift evolution of the drifting coefficient, $\\beta(z)$, turns out to significantly differ among different cosmologies. It is also shown that even without using any prior information on the background cosmology the drifting coefficient, $\\beta(z)$, can discriminate with high statistical significance the degenerate non-standard cosmologies not only from the $\\Lambda$CDM" +"---\nabstract: 'As of July 31, 2020, the COVID-19 pandemic has over 17 million reported cases, causing more than 667,000 deaths. Countries irrespective of economic status have succumbed to this pandemic. Many aspects of life, including health, the economy, and freedom of movement, have been negatively affected by the coronavirus outbreak. Numerous strategies have been adopted in order to prevent the outbreak. Some countries imposed severe restrictions in the form of full-scale lockdown, while others took a moderate approach of dealing with the pandemic, for example, mass testing, prohibiting large-scale public gatherings, and restricting international travel. South America adopted primarily the lockdown strategies due to inadequate economic and health care support. Since the social interactions between the people are primarily affected by the lockdown, psychological distress, e.g. 
anxiety, stress, fear are supposedly affecting the South American population in a severe way. This paper aims to explore the impact of lockdown over the psychological aspect of the people of all the Spanish speaking South American capitals. We have utilized infodemiology approach by employing large-scale Twitter data-set over 33 million feeds in order to understand people\u2019s interaction over the months of this on-going coronavirus pandemic. Our result is surprising: at the beginning of the" +"---\nabstract: 'We present a new fast radio burst (FRB) at 920 MHz discovered during commensal observations conducted with the Australian Square Kilometre Array Pathfinder (ASKAP) as part of the Commensal Real-time ASKAP Fast Transients (CRAFT) survey. FRB\u00a0191001 was detected at a dispersion measure (DM) of 506.92(4) pc\u00a0cm$^{-3}$ and its measured fluence of 143(15) Jy\u00a0ms is the highest of the bursts localized to host galaxies by ASKAP to date. The subarcsecond localization of the FRB provided by ASKAP reveals that the burst originated in the outskirts of a highly star-forming spiral in a galaxy pair at redshift $z=0.2340(1)$. Radio observations show no evidence for a compact persistent radio source associated with the FRB\u00a0191001 above a flux density of $15\\upmu$Jy. However, we detect diffuse synchrotron radio emission from the disk of the host galaxy that we ascribe to ongoing star formation. FRB\u00a0191001 was also detected as an image-plane transient in a single 10 s snapshot with a flux density of 19.3\u00a0mJy in the low-time-resolution visibilities obtained simultaneously with CRAFT data. The commensal observation facilitated a search for repeating and slowly varying radio emissions 8 hr before and 1 hr after the burst. We found no" +"---\nabstract: 'We theoretically study the sound propagation in a two-dimensional weakly interacting uniform Bose gas. Using the classical fields approximation we analyze in detail the properties of density waves generated both in a weak and strong perturbation regimes. While in the former case density excitations can be described in terms of hydrodynamic or collisionless sound, the strong disturbance of the system results in a qualitatively different response. We identify observed structures as quasisolitons and uncover their internal complexity for strong perturbation case. For this regime quasisolitons break into vortex pairs as time progresses, eventually reaching an equilibrium state. We find this state, characterized by only fluctuating in time averaged number of pairs of opposite charge vortices and by appearance of a quasi-long-range order, as the Berezinskii-Kosterlitz-Thouless (BKT) phase.'\nauthor:\n- Krzysztof Gawryluk\n- 'Miros[\u0142]{}aw Brewczyk'\ntitle: 'Berezinskii-Kosterlitz-Thouless phase induced by dissipating quasisolitons'\n---\n\nIntroduction\n============\n\nSound waves carry information on both thermodynamic and transport properties of a medium they propagate through. In classical hydrodynamics, measuring the speed of sound waves and their attenuation gives an access to characteristics of the medium such as the compressibility and viscosity. In quantum hydrodynamics, with superfluids present, the picture is more complex [@PitaevskiiStringari]." +"---\nabstract: 'Metric ground navigation addresses the problem of autonomously moving a robot from one point to another in an obstacle-occupied planar environment in a collision-free manner. It is one of the most fundamental capabilities of intelligent mobile robots. 
This paper presents a standardized testbed with a set of environments and metrics to benchmark the difficulty of different scenarios and the performance of different systems for metric ground navigation. Current benchmarks focus on individual components of mobile robot navigation, such as perception and state estimation, but the navigation performance as a whole is rarely measured in a systematic and standardized fashion. As a result, navigation systems are usually tested and compared in an ad hoc manner, such as in one or two manually chosen environments. The introduced benchmark provides a general testbed for ground robot navigation in a metric world. The Benchmark for Autonomous Robot Navigation (BARN) dataset includes 300 navigation environments, which are ordered by a set of difficulty metrics. Navigation performance can be tested and compared in those environments in a systematic and objective fashion. This benchmark can be used to predict navigation difficulty of a new environment, compare navigation systems, and potentially serve as a cost function and a" +"---\nabstract: |\n Let $G$ be an $n$-vertex graph with maximum degree $\\Delta$ and minimum degree $\\delta$. We give algorithms with complexity $O(1.3158^{n-0.7~\\Delta(G)})$ and $O(1.32^{n-0.73~\\Delta(G)})$ that determine if $G$ is 3-colorable, when $\\delta(G)\\geq 8$ and $\\delta(G)\\geq 7$, respectively.\n\n [**Keywords: algorithms, complexity, proper coloring, 68W01, 68Q25, 05C15**]{}\nauthor:\n- 'Nicholas Crawford, Sogol Jahanbekam, and Katerina Potika'\ntitle: 'Improved algorithm to determine 3-colorability of graphs with the minimum degree at least 7'\n---\n\nIntroduction\n============\n\nA coloring of the vertices of a graph is *proper* if adjacent vertices receive different colors. A graph $G$ is $k$-*colorable* if it has a proper coloring using $k$ colors. The *chromatic number* of a graph $G$, written as $\\chi(G)$, is the smallest integer $k$ such that $G$ is $k$-colorable.\n\nThe proper coloring problem is one of the most studied problems in graph theory. To determine the chromatic number of a graph, one should find the smallest integer $k$ for which the graph is $k$-colorable. The $k$-colorability problem, for $k\\geq 3$, is one of the classical NP-complete problems [@W].\n\nEven approximating the chromatic number has been shown to be a very hard problem. Lund and Yannakakis [@LY] have shown that there is an" +"---\nabstract: 'This paper presents a convex programming approach to the optimization of a multistage launch vehicle ascent trajectory, from liftoff to payload injection into the target orbit, taking into account multiple nonconvex constraints, such as the maximum heat flux after fairing jettisoning and the splash-down of the burned-out stages. Lossless and successive convexification are employed to convert the problem into a sequence of convex subproblems. Virtual controls and buffer zones are included to ensure the recursive feasibility of the process, and a state-of-the-art method for updating the reference solution is implemented to filter out undesired phenomena that may hinder convergence. An $hp$ pseudospectral discretization scheme is used to accurately capture the complex ascent and return dynamics with limited computational effort. The convergence properties, computational efficiency, and robustness of the algorithm are discussed on the basis of numerical results. 
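The launch-vehicle abstract above turns a nonconvex trajectory problem into a sequence of convex subproblems. The toy loop below sketches one successive-convexification iteration pattern using cvxpy (an assumed dependency); virtual controls, buffer zones, and the $hp$ pseudospectral transcription from the paper are all omitted.

```python
import cvxpy as cp
import numpy as np

def successive_convexification(g_and_grad, x0, iters=10, trust=1.0):
    # Linearize the nonconvex constraint g(x) <= 0 about the reference
    # point and solve the resulting convex program inside a trust region.
    x_ref = x0.copy()
    for _ in range(iters):
        x = cp.Variable(len(x0))
        g, grad = g_and_grad(x_ref)
        constraints = [g + grad @ (x - x_ref) <= 0,
                       cp.norm(x - x_ref, "inf") <= trust]
        cp.Problem(cp.Minimize(cp.sum_squares(x)), constraints).solve()
        x_ref = x.value
    return x_ref

# Toy example: minimize ||x||^2 subject to the nonconvex ||x||^2 >= 1.
g_and_grad = lambda xr: (1.0 - xr @ xr, -2.0 * xr)
print(successive_convexification(g_and_grad, np.array([2.0, 0.0])))
```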
[The ascent of the VEGA launch vehicle toward a polar orbit is used as case study to discuss]{} the interaction between the heat flux and splash-down constraints. Finally, a sensitivity analysis of the launch vehicle carrying capacity to different splash-down locations is presented.'\nauthor:\n- |\n Boris Benedikter[^1], Alessandro Zavoli[^2], Guido Colasurdo[^3],\\\n Simone Pizzurro[^4], \u00a0and Enrico Cavallini[^5]\nbibliography:" +"---\nauthor:\n- 'Dennis Bonatsos[^1]'\n- \n- Andriana Martinou\n- 'S. Peroulis'\n- 'S. Sarantopoulou'\n- 'N. Minkov'\ntitle: 'Breaking SU(3) spectral degeneracies in heavy deformed nuclei'\n---\n\n[leer.eps]{} gsave 72 31 moveto 72 342 lineto 601 342 lineto 601 31 lineto 72 31 lineto showpage grestore\n\nProxy-SU(3) is an approximate symmetry appearing in heavy deformed nuclei [@proxy1; @proxy2]. The foundations of proxy-SU(3) [@Assimakis], its parameter-free predictions for the collective deformation parameters $\\beta$ and $\\gamma$ [@Bonatsos; @Martinou], as well as for $B(E2)$ ratios [@Martinou], have been discussed and its usefulness in explaining the dominance of prolate over oblate shapes in the ground states of even-even nuclei [@Sarantopoulou] and the point of the prolate to oblate shape transition in the rare earths region [@Sarantopoulou] has been demonstrated. In the present contribution, preliminary calculations for the spectra of heavy deformed nuclei, in which three-body and four-body operators are needed, will be discussed.\n\nSince Elliott demonstrated the relation of SU(3) symmetry to nuclear deformation [@Elliott1; @Elliott2], several group theoretical approaches to rotational nuclei have been developed. In theories approximating correlated valence nucleon pairs by bosons, like the Interacting Boson Model (IBM) [@IA], the ground state band (gsb) is sitting in the lowest-lying irreducible" +"---\nabstract: 'Given a finite covering of graphs $f : Y \\to X$, it is not always the case that $H_1(Y;\\mathbb{C})$ is spanned by lifts of primitive elements of $\\pi_1(X)$. In this paper, we study graphs for which this is not the case, and we give here the simplest known nontrivial examples of covers with this property, with covering degree as small as 128. Our first step is focusing our attention on the special class of graph covers where the deck group is a finite $p$-group. For such covers, there is a representation-theoretic criterion for identifying deck groups for which there exist covers with the property. We present an algorithm for determining if a finite $p$-group satisfies this criterion that uses only the character table of the group. Finally, we provide a complete census of all finite $p$-groups of rank $\\geq 3$ and order $< 1000$ satisfying this criterion, all of which are new examples.'\nauthor:\n- 'Destine Lee, Iris Rosenblum-Sellers, Jakwanul Safin, Anda Tenie'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Graph coverings and (im)primitive homology: some new examples of exceptionally low degree'\n---\n\nIntroduction\n============\n\nGiven a graph cover $Y$ of a finite graph $X$ with finite deck group $G$, there" +"---\nabstract: 'We have used hydrodynamical simulations to model the formation of the closest giant elliptical galaxy, Centaurus A. We find that a single major merger event with a mass ratio up to 1.5, and which has happened $\\sim 2$ Gyr ago, is able to reproduce many of its properties, including galaxy kinematics, the inner gas disk, stellar halo ages and metallicities, and numerous faint features observed in the halo. 
The elongated halo shape is mostly made of progenitor residuals deposited by the merger, which also contribute to the stellar shells observed in the Centaurus A halo. The current model also reproduces the measured Planetary Nebulae line-of-sight velocities and their velocity dispersion. Models with a small mass ratio and relatively low gas fraction result in a de Vaucouleurs profile distribution, which is consistent with observations and model expectations. A recent merger left imprints in the age distribution that are consistent with the young stellar and Globular Cluster populations (2-4 Gyr) found within the halo. We conclude that even if not all properties of Centaurus A have been accurately reproduced, a recent major merger has likely occurred to form the Centaurus A galaxy as we observe it at the present day.'\nauthor:\n-" +"---\nabstract: |\n We review the properties of fractals, the Mandelbrot set and how deterministic chaos ties into the picture. A detailed study of three-body systems, one of the major applications of chaos theory, was undertaken. Systems belonging to different families produced to date were studied and their properties were analysed. We then segregated them into three classes according to their properties. We suggest that such reviews be carried out at regular intervals as there are an infinite number of solutions for three-body systems and some of them may prove to be useful in various domains apart from hierarchical systems.\n\n Key words - Celestial mechanics, Three-body problem, Gravitational interaction, Chaos, Orbits, Astronomical simulations\nauthor:\n- |\n T.S.Sachin Venkatesh\\\n [](mailto:tssachin.venkatesh@gmail.com)\n- Vishak Vikranth\nbibliography:\n- 'main.bib'\ntitle: Investigating the relation between chaos and the three body problem\n---\n\n\\[sec:intro\\]Introduction\n=========================\n\nExploring the connections between different theories of mathematics leads one through successive topics which diverge very little from one another, but looking at the path as a whole, the initial point of probing and the final point have very little in common. So a quick introduction to the topics we covered is listed below\n\nFractals\n--------" +"---\nabstract: 'In the Paris agreement of 2015, it was decided to reduce the CO$_2$ emissions of the energy sector to zero by 2050 and to restrict the global mean temperature increase to $1.5^\\circ$C above the pre-industrial level. Such commitments are possible only with practically CO$_2$-free power generation based on variable renewable technologies. Historically, the main point of criticism regarding renewable power is the variability driven by weather dependence. Power-to-X systems, which convert excess power to other stores of energy for later use, can play an important role in offsetting the variability of renewable power production. In order to do so, however, these systems have to be scheduled properly to ensure they are being powered by low-carbon technologies. In this paper, we introduce a graphical approach for scheduling power-to-X plants in the day-ahead market by minimizing carbon emissions and electricity costs. This graphical approach is simple to implement and to explain intuitively to stakeholders. In a simulation study using historical prices and CO$_2$ intensity for four different countries, we find that the price and CO$_2$ intensity tend to decrease with increasing scheduling horizon. The effect diminishes when requiring an increasing number of full-load hours per year. 
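A minimal sketch of the kind of day-ahead power-to-X scheduling the preceding abstract describes — not the paper's graphical method: the plant is switched on in the hours with the lowest weighted combination of electricity price and CO$_2$ intensity until the required full-load hours are met. All inputs, units, and the weighting parameter here are illustrative assumptions.

```python
import numpy as np

def normalise(v):
    v = np.asarray(v, dtype=float)
    span = v.max() - v.min()
    return (v - v.min()) / span if span > 0 else np.zeros_like(v)

def schedule_ptx(price, co2, hours_needed, co2_weight=0.5):
    """Switch the plant on in the hours with the lowest weighted score of
    electricity price and CO2 intensity (hypothetical inputs)."""
    score = (1 - co2_weight) * normalise(price) + co2_weight * normalise(co2)
    on = np.zeros(len(score), dtype=bool)
    on[np.argsort(score)[:hours_needed]] = True
    return on

# Synthetic 24-hour day-ahead horizon: cheap, green hours get selected.
hours = np.linspace(0, 2 * np.pi, 24)
price = 40 + 20 * np.sin(hours)      # EUR/MWh, synthetic
co2 = 300 + 100 * np.sin(hours)      # gCO2/kWh, synthetic
plan = schedule_ptx(price, co2, hours_needed=8)
print(plan.astype(int), round(price[plan].mean(), 1), round(co2[plan].mean(), 1))
```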
Additionally, investigating the trade-off" +"---\nabstract: 'LaAgSb$_{2}$ is a rare material, which offers the opportunity to investigate the complex interplay between charge density wave (CDW) ordering and topology protected electronic band structure. As both of these phenomena are governed by the structural symmetries, a comprehensive study of the lattice dynamics is highly desirable. In this report, we present the results of temperature- and pressure-dependent Raman spectroscopy and x-ray diffraction in single crystalline LaAgSb$_{2}$. Our results confirm that Raman spectroscopy is a highly sensitive tool to probe the CDW ordering phenomenon, particularly the low-temperature second CDW transition in LaAgSb$_{2}$, which appears as a very weak anomaly in most experiments. The crystal orientation-dependent measurements provide the evolution of Raman modes with crystallographic symmetries and can be further studied through group symmetry analysis. The low-temperature x-ray diffraction data show the emergence of structural modulations corresponding to the CDW instability. The combined high-pressure Raman spectroscopy and synchrotron x-ray diffraction reveal multiple structural phase transitions through lowering of crystalline symmetries, which are also expected to lead to electronic topological transitions.'\nauthor:\n- Ratnadwip Singha\n- Sudeshna Samanta\n- Tara Shankar Bhattacharya\n- Swastika Chatterjee\n- Shubhankar Roy\n- Lin Wang\n- Achintya Singha\n- Prabhat Mandal\ntitle: 'Lattice dynamics" +"---\nabstract: 'Speaker recognition performance has been greatly improved with the emergence of deep learning. Deep neural networks show the capacity to effectively deal with the impacts of noise and reverberation, making them attractive to far-field speaker recognition systems. The x-vector framework is a popular choice for generating speaker embeddings in recent literature due to its robust training mechanism and excellent performance in various test sets. In this paper, we start from early work that incorporates invariant representation learning (IRL) into the loss function and modify the approach with centroid alignment (CA) and length variability cost (LVC) techniques to further improve robustness in noisy, far-field applications. This work mainly focuses on improvements for short-duration test utterances (1-8s). We also present improved results on long-duration tasks. In addition, this work discusses a novel self-attention mechanism. On the VOiCES far-field corpus, the combination of the proposed techniques achieves relative improvements of $7.0\\%$ for extremely short and $8.2\\%$ for full-duration test utterances on equal error rate (EER) over our baseline system.'\naddress: |\n $^1$Intel Labs\\\n $^2$Apple Inc.\\\n $^3$Technische Hochschule N\u00fcrnberg\nbibliography:\n- 'mybib.bib'\ntitle: |\n Length- and Noise-aware Training Techniques\\\n for Short-utterance Speaker Recognition\n---\n\n**Index Terms**: speaker recognition, invariant representation learning, centroid alignment," +"---\nabstract: 'Electronic nearsightedness is one of the fundamental principles governing the behavior of condensed matter and supporting its description in terms of local entities such as chemical bonds. Locality also underlies the tremendous success of machine-learning schemes that predict quantum mechanical observables \u2013 such as the cohesive energy, the electron density, or a variety of response properties \u2013 as a sum of atom-centred contributions, based on a short-range representation of atomic environments. 
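As a toy illustration of the atom-centred decomposition the preceding abstract describes — not the paper's actual framework — the sketch below models a scalar property as a sum of per-atom contributions, each a linear function of a feature vector for that atom's local environment; all features and weights are synthetic assumptions.

```python
import numpy as np

def predict_total(atom_features, w):
    """Atom-centred decomposition: y = sum_i w . phi_i."""
    return float((atom_features @ w).sum())

# Toy data: 10 "structures" with varying atom counts, 4 features per atom.
rng = np.random.default_rng(1)
structures = [rng.normal(size=(n, 4)) for n in (2, 3, 5, 4, 6, 2, 3, 7, 5, 4)]
true_w = np.array([0.5, -1.0, 0.2, 0.8])           # hypothetical weights
y = np.array([predict_total(s, true_w) for s in structures])

# Because the model is linear, summing features per structure lets us
# recover the weights with ordinary least squares.
X = np.stack([s.sum(axis=0) for s in structures])
w_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(w_fit, true_w))                  # True
```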
One of the main shortcomings of these approaches is their inability to capture physical effects, ranging from electrostatic interactions to quantum delocalization, which have a long-range nature. Here we show how to build a multi-scale scheme that combines in the same framework local and non-local information, overcoming such limitations. We show that the simplest version of such features can be put in formal correspondence with a multipole expansion of permanent electrostatics. The data-driven nature of the model construction, however, makes this simple form also suitable for tackling different types of delocalized and collective effects. We present several examples that range from molecular physics, to surface science and biophysics, demonstrating the ability of this multi-scale approach to model interactions driven by electrostatics, polarization and dispersion, as well as" +"---\nabstract: 'Robust performance of control schemes for open quantum systems is investigated under classical uncertainties in the generators of the dynamics and nonclassical uncertainties due to decoherence and initial state preparation errors. A formalism is developed to measure performance based on the transmission of a dynamic perturbation or initial state preparation error to the quantum state error. This makes it possible to apply tools from classical robust control such as structured singular value analysis. A difficulty arising from the singularity of the closed-loop Bloch equations for the quantum state is overcome by introducing the \\#-inversion lemma, a specialized version of the matrix inversion lemma. Under some conditions, this guarantees continuity of the structured singular value at $s = 0$. Additional difficulties occur when symmetry gives rise to multiple open-loop poles, which under symmetry-breaking unfold into single eigenvalues. The concepts are applied to systems subject to pure decoherence and a general dissipative system example of two qubits in a leaky cavity under laser driving fields and spontaneous emission. A nonclassical performance index, steady-state entanglement quantified by the concurrence, a nonlinear function of the system state, is introduced. Simulations confirm a conflict between entanglement, its log-sensitivity and stability margin under decoherence.'" +"---\nabstract: 'Here we present the results of an airborne 3-5.4 $\\mu$m spectroscopic study of three young, Carbon-rich planetary nebulae IC 5117, PNG 093.9-00.1, and BD $+$30 3639. These observations were made using the grism spectroscopy mode of the FLITECAM instrument during airborne science operations onboard NASA\u2019s Stratospheric Observatory for Infrared Astronomy (SOFIA). The goal of this study is to characterize the 3.3 and 5.25 $\\mu$m PAH dust emission in planetary nebulae and study the evolution of PAH features within evolved stars before their incorporation into new stellar systems in star-forming regions. Targets were selected from IRAS, KAO and ISO source lists, and were previously observed with FLITECAM on the 3-meter Shane telescope at Lick Observatory to allow direct comparison between the ground and airborne observations. We measure PAH emission equivalent width and central wavelength, classify the shape of the PAH emission, and determine the PAH/Aliphatic ratio for each target. The 3.3 $\\mu$m PAH emission feature is observed in all three objects. PNG 093.9-00.1 exhibits NGC 7027-like aliphatic emission in the 3.4\u20133.6 $\\mu$m region, while IC 5117 and BD +30 3639 exhibit less aliphatic structure. 
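The equivalent-width measurement mentioned in the preceding abstract can be illustrated with a short, self-contained sketch: fit a continuum outside the feature window and integrate the normalised excess across it. The wavelength grid, Gaussian feature, and window below are synthetic, not the FLITECAM data.

```python
import numpy as np

wave = np.linspace(3.0, 3.6, 601)                  # micron, synthetic grid
continuum_true = 1.0 + 0.2 * (wave - 3.0)          # linear continuum
feature = 0.6 * np.exp(-0.5 * ((wave - 3.3) / 0.02) ** 2)
flux = continuum_true + feature

# Fit the continuum outside the feature window, then integrate the
# normalised excess across it (emission-line convention).
in_feature = (wave > 3.2) & (wave < 3.4)
cont_fit = np.polyval(np.polyfit(wave[~in_feature], flux[~in_feature], 1), wave)
dlam = wave[1] - wave[0]
ew = np.sum(((flux - cont_fit) / cont_fit)[in_feature]) * dlam
print(f"equivalent width ~ {ew * 1e3:.1f} nm")     # ~28 nm for this toy feature
```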
All three PNs additionally exhibit PAH emission at 5.25 $\mu$m.'\nauthor:\n- 'Erin C. Smith'\n- 'Sarah E. Logsdon'" +"---\nabstract: 'Lane detection is one of the most important tasks in self-driving. Due to various complex scenarios\u00a0(*e.g.*, severe occlusion, ambiguous lanes, *etc*.) and the sparse supervisory signals inherent in lane annotations, the lane detection task is still challenging. Thus, it is difficult for an ordinary convolutional neural network (CNN) trained in general scenes to capture subtle lane features from the raw image. In this paper, we present a novel module named REcurrent Feature-Shift Aggregator (RESA) to enrich lane features after preliminary feature extraction with an ordinary CNN. RESA takes advantage of strong shape priors of lanes and captures spatial relationships of pixels across rows and columns. It shifts the sliced feature map recurrently in vertical and horizontal directions and enables each pixel to gather global information. RESA can infer lanes accurately in challenging scenarios with weak appearance clues by aggregating the sliced feature maps. Moreover, we propose a Bilateral Up-Sampling Decoder that combines coarse-grained and fine-detailed features in the up-sampling stage. It can meticulously recover pixel-wise predictions from the low-resolution feature map. Our method achieves state-of-the-art results on two popular lane detection benchmarks\u00a0(CULane and Tusimple). Code has been made available at: https://github.com/ZJULearning/resa.'\nauthor:\n- 'Tu Zheng^1,2^[^1], Hao Fang^1^, Yi" +"---\nabstract: 'The consequences of the short-range nature of the nucleon-nucleon interaction, which forces the spatial part of the nuclear wave function to be as symmetric as possible, on the pseudo-SU(3) scheme are examined through a study of the collective deformation parameters $\\beta$ and $\\gamma$ in the rare earth region. It turns out that beyond the middle of each harmonic oscillator shell possessing an SU(3) subalgebra, the highest weight irreducible representation (the hw irrep) of SU(3) has to be used, instead of the irrep with the highest eigenvalue of the second order Casimir operator of SU(3) (the hC irrep), while in the first half of each shell the two choices are identical. The choice of the hw irrep predicts a transition from prolate to oblate shapes just below the upper end of the rare earth region, between the neutron numbers $N=114$ and 116 in the W, Os, and Pt series of isotopes, in agreement with available experimental information, while the choice of the hC irrep leads to a prolate to oblate transition in the middle of the shell, which is not seen experimentally. The prolate over oblate dominance in the ground states of even-even nuclei is obtained as a" +"---\nabstract: 'We propose an analytical construction of observable functions in the extended dynamic mode decomposition (EDMD) algorithm. EDMD is a numerical method for approximating the spectral properties of the Koopman operator. The choice of observable functions is fundamental for the application of EDMD to nonlinear problems arising in systems and control. Existing methods either start from a set of dictionary functions and look for the subset that best fits the underlying nonlinear dynamics or they rely on machine learning algorithms to \u201clearn\u201d observable functions. Conversely, in this paper, we start from the dynamical system model and lift it through the Lie derivatives, rendering it into a polynomial form. 
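For context, a generic EDMD regression step can be sketched in a few lines — the paper's contribution is the analytical, Lie-derivative-based choice of dictionary, not this regression itself. The dictionary and toy dynamics below are illustrative assumptions.

```python
import numpy as np

def dictionary(x):
    # Illustrative polynomial dictionary {1, x, x^2} for a 1-D state.
    return np.stack([np.ones_like(x), x, x**2], axis=1)

# Snapshot pairs from a toy nonlinear map x' = 0.9 x - 0.1 x^2.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 200)
x_next = 0.9 * x - 0.1 * x**2

# Least-squares finite-dimensional Koopman approximation: Psi(X) K ~ Psi(X').
PsiX, PsiY = dictionary(x), dictionary(x_next)
K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)

print(np.sort(np.abs(np.linalg.eigvals(K))))  # near {0.81, 0.9, 1} up to closure error
```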
This proposed transformation into a polynomial form is exact, and it provides an adequate set of observable functions. The strength of the proposed approach is its applicability to a broader class of nonlinear dynamical systems, particularly those with nonpolynomial functions and compositions thereof. Moreover, it retains the physical interpretability of the underlying dynamical system and can be readily integrated into existing numerical libraries. The proposed approach is illustrated with an application to electric power systems. The modeled system consists of a single generator connected to an infinite bus, where nonlinear terms" +"---\nabstract: 'In the present paper, we generalize the Markov triples in two different directions. One is a generalization in the direction of the $q$-deformation of rational numbers introduced by [@MO] in connection with cluster algebras, quantum topology and analytic number theory. The other is a generalization in the direction of castling transforms on prehomogeneous vector spaces [@SaKi], which play an important role in the study of representation theory and automorphic functions. In addition, the present paper gives a relationship between the two generalizations. This may provide a kind of bridge between different fields.'\nauthor:\n- |\n Takeyoshi Kogiso\\\n Department of Mathematics, Josai University,\\\n 1-1, Keyakidai Sakado, Saitama, 350-0295, Japan\\\n E-mail address: kogiso@josai.ac.jp\\\ntitle: '$q$-Deformations and $t$-Deformations of the Markov triples'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nIt is well known that the Markov triple $(x,y,z)=(\\mathrm{Tr}(w(A,B))/3, \\mathrm{Tr}(w(A,B) w'(A,B))/3, \\mathrm{Tr}(w'(A,B))/3)$ arises as a solution of the Markov equation $x^{2} + y^{2} +z^{2} =3xyz$ from a triple of Christoffel $ab$-words $(w(a,b), w(a,b)w'(a,b), w' (a,b))$, where $A=\\begin{pmatrix} \n2 & 1 \\\\ \n1 & 1 \n\\end{pmatrix} , ~\nB=\\begin{pmatrix} \n5 & 2 \\\\\n2 & 1 \n\\end{pmatrix}$.\n\nThis paper introduces two generalizations of the Markov triple in two completely different directions, and the relation between them.\n\nSophie Morier-Genoud" +"---\nabstract: 'High-quality, usable, and effective software is essential for supporting astronomers in the discovery-focused tasks of data analysis and visualisation. As the volume, and perhaps more crucially, the velocity of astronomical data grows, the role of the astronomer is changing. There is now an increased reliance on automated and autonomous discovery and decision-making workflows rather than visual inspection. We assert the need for an improved understanding of how astronomers (humans) currently make visual discoveries from data. This insight is a critical element for the future design, development and effective use of cyber-human discovery systems, where astronomers work in close collaboration with automated systems to gain understanding from continuous, real-time data streams. We discuss how relevant human performance data could be gathered, specifically targeting the domains of expertise and skill at visual discovery, and the identification and management of cognitive factors. 
By looking to other disciplines where human performance is assessed and measured, we propose four early-stage applications that would: (1) allow astronomers to evaluate, and potentially improve, their own visual discovery skills; (2) support just-in-time coaching; (3) enable talent identification; and (4) result in user interfaces that automatically respond to skill level and cognitive state. Throughout, we advocate for" +"---\nabstract: 'Assuming the best numerical value for the cosmic baryonic density and the existence of three neutrino flavors, standard big bang nucleosynthesis is a parameter-free model. It is important to assess whether the observed primordial abundances can be reproduced by simulations. Numerous studies have shown that the simulations overpredict the primordial $^7$Li abundance by a factor of $\\approx$ $3$ compared to the observations. The discrepancy may be caused by unknown systematics in $^7$Li observations, poorly understood depletion of lithium in stars, errors in thermonuclear rates that take part in the lithium and beryllium synthesis, or physics beyond the standard model. Here, we focus on the likelihood of a nuclear physics solution. The status of the key nuclear reaction rates is summarized. Big bang nucleosynthesis simulations are performed with the most recent reaction rates and the uncertainties of the predicted abundances are established using a Monte Carlo technique. Correlations between abundances and reaction rates are investigated based on the metric of mutual information. The rates of four reactions impact the primordial $^7$Li abundance: $^3$He($\\alpha$,$\\gamma$)$^7$Be, d(p,$\\gamma$)$^3$He, $^7$Be(d,p)2$\\alpha$, and $^7$Be(n,p)$^7$Li. We employ a genetic algorithm to search for simultaneous rate changes in these four reactions that may account for all observed primordial" +"---\nabstract: |\n We introduce a discrete-time search game, in which two players compete to find an object first. The object moves according to a time-varying Markov chain on finitely many states. The players know the Markov chain and the initial probability distribution of the object, but do not observe the current state of the object. The players are active in turns. The active player chooses a state, and this choice is observed by the other player. If the object is in the chosen state, this player wins and the game ends. Otherwise, the object moves according to the Markov chain and the game continues at the next period.\n\n We show that this game admits a value, and for any error-term ${\\varepsilon}>0$, each player has a pure (subgame-perfect) ${\\varepsilon}$-optimal strategy. Interestingly, a 0-optimal strategy does not always exist. The ${\\varepsilon}$-optimal strategies are robust in the sense that they are $2{\\varepsilon}$-optimal on all finite but sufficiently long horizons, and also $2{\\varepsilon}$-optimal in the discounted version of the game provided that the discount factor is close to 1. We derive results on the analytic and structural properties of the value and the ${\\varepsilon}$-optimal strategies. Moreover, we examine the performance of the finite" +"---\nabstract: 'For interactive agents, such as task-oriented spoken dialog systems or chatbots, measuring and adapting to Customer Satisfaction (CSAT) is critical in order to understand user perception of an agent\u2019s behavior and increase user engagement and retention. However, an agent often relies on explicit customer feedback for measuring CSAT. 
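The idea of the CSAT abstract beginning above — pooling utterance-level sentiment into a conversation-level satisfaction estimate — can be sketched minimally as follows. The recency weighting and linear calibration are illustrative stand-ins for the learned temporal models the abstract describes, not the paper's method.

```python
import numpy as np

def estimate_csat(valence, half_life=5.0, bias=3.0, scale=2.0):
    """Map per-utterance valence scores (-1..1) to a 1..5 CSAT estimate.

    Recent utterances get exponentially more weight, on the (assumed)
    premise that the end of a conversation dominates the user's judgement.
    """
    valence = np.asarray(valence, dtype=float)
    age = np.arange(len(valence))[::-1]          # 0 = most recent turn
    weights = 0.5 ** (age / half_life)
    pooled = np.average(valence, weights=weights)
    return float(np.clip(bias + scale * pooled, 1.0, 5.0))

print(estimate_csat([0.2, 0.1, -0.4, -0.6, -0.8]))  # deteriorating mood -> low
print(estimate_csat([-0.5, 0.0, 0.4, 0.7, 0.9]))    # recovery -> high
```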
Such explicit feedback may result in potential distraction to users and it can be challenging to capture users\u2019 continuously changing satisfaction. To address this challenge, we present a new approach to automatically estimate CSAT using acoustic and lexical information in the Alexa Prize Socialbot data. We first explore the relationship between CSAT and sentiment scores at both the utterance and conversation level. We then investigate static and temporal modeling methods that use estimated sentiment scores as a mid-level representation. The results show that the sentiment scores, particularly valence and satisfaction, are correlated with CSAT. We also demonstrate that our proposed temporal modeling approach for estimating CSAT achieves competitive performance, relative to static baselines as well as human performance. This work provides insights into open domain social conversations between real users and socialbots, and the use of both acoustic and lexical information for understanding the relationship between CSAT and sentiment" +"---\nabstract: 'We study the phase diagram of memristive circuit models in the replica-symmetric case using a novel Lyapunov function for the dynamics of these devices. Effectively, the model we propose is an Ising model with *interacting* quenched disorder, which we study to first order in a control parameter. Notwithstanding these limitations, we find a complex phase diagram and a glass-ferromagnetic transition in the parameter space which generalizes earlier mean field theory results for a simpler model. Our results suggest a non-trivial landscape of asymptotic states for memristive circuits.'\nauthor:\n- 'F. Caravelli'\n- 'F. C. Sheldon'\ntitle: Phases of memristive circuits via an interacting disorder approach\n---\n\nIntroduction\n============\n\nThere are several intriguing features, still unexplored, in the dynamical analysis of circuits with memory, and in particular memristors [@reviewCarCar; @Rev1; @Rev2]. An essential property of a memristor is its (pinched at the origin) hysteretic behavior in the voltage-current diagram and the non-linearity of the component. Physical memristors [@chua71; @chua76; @stru8; @Valov; @stru13] have rather non-trivial voltage-current curves, but some core features are captured by a simple description which we adopt in this paper. The state of the resistance between two limiting values can be parametrized by a parameter" +"---\nabstract: 'In this work we examine refraction of light by computing full solutions to axion electrodynamics. We also allow for the possibility of an additional plasma component. We then specialise to wavelengths which are small compared to background scales to determine if refraction can be described by geometric optics. In the absence of plasma, for small incidence angles relative to the optical axis, axion electrodynamics and geometric optics are in good agreement, with refraction occurring at $\\mathcal{O}(g_{a \\gamma \\gamma}^2)$. However, for rays which lie far from the optical axis, the agreement with geometric optics breaks down and the dominant refraction requires a full wave-optical treatment, occurring at $\\mathcal{O}(g_{a \\gamma \\gamma})$. In the presence of sufficiently large plasma masses, the wave-like nature of light becomes suppressed and geometric optics is in good agreement with the full theory for all rays. 
Our results therefore suggest the necessity of a more comprehensive study of lensing and ray-tracing in axion backgrounds, including a full account of the novel $\mathcal{O}(g_{a \gamma \gamma})$ wave-optical contribution to refraction.'\nauthor:\n- 'Jamie\u00a0I.\u00a0McDonald'\n- 'Lu\u00eds B.\u00a0Ventura'\nbibliography:\n- 'References.bib'\ntitle: Bending of light in axion backgrounds\n---\n\nIntroduction\n============\n\nAxions [@Peccei:1977hh; @Weinberg:1977ma; @Wilczek:1977pj; @Conlon:2006tq; @Svrcek:2006yi]" +"---\nabstract: 'Every year, more and more objects are sent to space. While objects at high altitudes stay in orbit, objects at low altitudes reenter the atmosphere, mostly disintegrating and adding material to the upper atmosphere. The increasing number of countries with space programs, advancing commercialization, and ambitious satellite constellation projects raise concerns about space debris in the future and will continuously increase the mass flux into the atmosphere. In this study, we compare the mass influx of human-made (anthropogenic) objects to the natural mass flux into Earth\u2019s atmosphere due to meteoroids, originating from solar system objects like asteroids and comets. The current and near future significance of anthropogenic mass sources is evaluated, considering planned and already partially installed large satellite constellations. Detailed information about the mass, composition, and ablation of natural and anthropogenic material is given, reviewing the relevant literature. Today, anthropogenic material makes up about 2.8% of the annual injected mass of natural origin, but future satellite constellations may increase this fraction to nearly 40%. For this case, the anthropogenic injection of several metals far exceeds the injection from natural sources. Additionally, we find that the anthropogenic injection of aerosols into the atmosphere increases disproportionately. All" +"---\nabstract: |\n Based on the idea that the components of a cosmological metric may be determined by the total gravitational potential of the universe, the scalar field $\\phi=1/G$ in the Jordan-Brans-Dicke (JBD) theory is introduced as evolving with the inverse square of the scale factor. Since the gravitational potential is related to the field $\\phi$ resulting from Mach\u2019s principle and depends on time due to the expansion of space, the temporal evolution of the field should be in accord with the evolution of time and space intervals in the metric tensor. For the same reason, the time dependence of the field makes these comoving intervals relative for different points on the time axis. Thus, it is shown that the introduction of the cosmic gravitational potential as a time-dependent scalar field proportional to $1/a^2$ may resolve the flatness, the horizon and the late-time accelerating expansion problems of the standard model of cosmology. The luminosity distance vs redshift data of Type Ia supernovae are in agreement with this approach.\\\n **Keywords:** *Jordan Brans Dicke theory; flatness and horizon problems; late-time accelerating expansion.*\nauthor:\n- |\n Onder Dunya[^1] and Metin Arik[^2]\\\n *Department of Physics, Bogazici University, Bebek, Istanbul, Turkey*\nbibliography:\n- 'references.bib'\nnocite:" +"---\nauthor:\n- 'A. Sicilia-Aguilar'\n- 'J. Bouvier'\n- 'C. Dougados'\n- 'K. Grankin'\n- 'J. F. Donati'\ndate: 'Submitted May 25, 2020. 
Accepted August 31, 2020'\nsubtitle: 'Disk emission, wind, and accretion during the Z\u00a0CMa NW outburst[^1]'\ntitle: 'Reading between the lines:'\n---\n\nWe use optical spectroscopy to investigate the disk, wind, and accretion during the 2008 Z\u00a0CMa NW outburst. Emission lines were used to constrain the locations, densities, and temperatures of the structures around the star. More than 1000 optical emission lines reveal accretion, a variable, multicomponent wind, and double-peaked lines of disk origin. The variable, non-axisymmetric, accretion-powered wind has slow ($\sim$0 km s$^{-1}$), intermediate ($\sim -$100 km s$^{-1}$), and fast ($\geq -$400 km s$^{-1}$) components. The fast components are of stellar origin and disappear in quiescence, while the slow component is less variable and could be related to a disk wind. The changes in the optical depth of the lines between outburst and quiescence reveal that increased accretion is responsible for the observed outburst. We derive an accretion rate of 10$^{-4}$ M$_\odot$/yr in outburst. The Fe I and weak Fe II lines arise from an irradiated, flared disk at $\sim$0.5-3 $\times$M$_*$/16 M$_\odot$ au with" +"---\nabstract: 'Nuclear state densities are important inputs to statistical models of compound-nucleus reactions. State densities are often calculated with self-consistent mean-field approximations that do not include important correlations and have to be augmented with empirical collective enhancement factors. Here, we benchmark the static-path plus random-phase approximation (SPA+RPA) to the state density in a chain of samarium isotopes $^{148-155}$Sm against exact results (up to statistical errors) obtained with the shell model Monte Carlo (SMMC) method. The SPA+RPA method incorporates all static fluctuations beyond the mean field together with small-amplitude quantal fluctuations around each static fluctuation. Using a pairing plus quadrupole interaction, we show that the SPA+RPA state densities agree well with the exact SMMC densities for both the even- and odd-mass isotopes. For the even-mass isotopes, we also compare our results with mean-field state densities calculated with the finite-temperature Hartree-Fock-Bogoliubov (HFB) approximation. We find that the SPA+RPA repairs the deficiencies of the mean-field approximation associated with broken rotational symmetry in deformed nuclei and the violation of particle-number conservation in the pairing condensate. In particular, in deformed nuclei the SPA+RPA reproduces the rotational enhancement of the state density relative to the mean-field state density.'\nauthor:\n- 'P. Fanto and Y. Alhassid'" +"---\nabstract: 'The graph matching problem emerges naturally in various applications such as web privacy, image processing and computational biology. In this paper, graph matching is considered under a stochastic model, where a pair of randomly generated graphs with pairwise correlated edges are to be matched such that, given the labeling of the vertices in the first graph, the labels in the second graph are recovered by leveraging the correlation among their edges. The problem is considered under various settings and graph models. In the first step, the Correlated Erd\u00f6s-R\u00e9nyi (CER) graph model is studied, where all edge pairs whose vertices have similar labels are generated based on identical distributions and independently of other edges. A matching scheme called the *typicality matching scheme* is introduced. 
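A toy, brute-force illustration of matching correlated Erdős–Rényi graphs — loosely in the spirit of typicality-based matching, but not the paper's scheme: each candidate permutation is scored by the empirical agreement of the two adjacency matrices, and the most "typical" one is kept. Graph size, edge density, and flip probability are synthetic assumptions; brute force limits this to tiny graphs.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n, p, flip = 7, 0.4, 0.05

A = (rng.random((n, n)) < p).astype(int)
A = np.triu(A, 1); A += A.T                        # undirected, no self-loops
noise = (rng.random((n, n)) < flip).astype(int)
noise = np.triu(noise, 1); noise += noise.T
B_unlabelled = A ^ noise                           # correlated copy of A

secret = rng.permutation(n)                        # hidden relabelling
B = B_unlabelled[np.ix_(secret, secret)]

def agreement(perm):
    q = np.asarray(perm)
    return int((A == B[np.ix_(q, q)]).sum())

best = max(itertools.permutations(range(n)), key=agreement)
print(np.array_equal(best, np.argsort(secret)))    # usually True at this noise level
```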
The scheme operates by investigating the joint typicality of the adjacency matrices of the two graphs. New results on the typicality of permutations of sequences lead to necessary and sufficient conditions for successful matching based on the parameters of the CER model. In the next step, the results are extended to graphs with community structure generated based on the Stochastic Block Model (SBM). The SBM is a generalization of the CER model where each" +"---\nabstract: 'Microgrids (MGs) are small-scale power systems which interconnect distributed energy resources and loads within clearly defined regions. However, the digital infrastructure used in an MG to relay sensory information and perform control commands can potentially be compromised due to a cyberattack from a capable adversary. An MG operator is interested in knowing the inherent vulnerabilities in their system and should regularly perform Penetration Testing (PT) activities to prepare for such an event. PT generally involves looking for defensive coverage blindspots in software and hardware infrastructure; however, the logic in control algorithms which act upon sensory information should also be considered in PT activities. This paper demonstrates a case study of PT for an MG control algorithm by using Reinforcement Learning (RL) to uncover malicious input which compromises the effectiveness of the controller. Through trial-and-error episodic interactions with a simulated MG, we train an RL agent to find malicious input which reduces the effectiveness of the MG controller.'\nauthor:\n- \nbibliography:\n- '\\\\jobname.bib'\ntitle: Reinforcement Learning Based Penetration Testing of a Microgrid Control Algorithm\n---\n\nmicrogrid, cybersecurity, false data injection, penetration testing, mathematical optimization, reinforcement learning\n\nIntroduction\n============\n\nPenetration Testing (PT) is the process of performing an authorized attack" +"---\nabstract: 'Time-dependent driving influences the quantum and thermodynamic fluctuations of a system, changing the familiar physical picture of electronic noise which is an important source of information about the microscopic mechanism of quantum transport. Giving access to all cumulants of the current, the full counting statistics (FCS) is a powerful theoretical method for studying fluctuations in nonequilibrium quantum systems. In this paper, we propose the application of FCS to periodically driven junctions. The combination of Floquet theory for time dynamics and nonequilibrium counting-field Green\u2019s functions enables the practical formulation of FCS for the system. The counting-field Green\u2019s functions are used to compute the moment generating function, allowing for the calculation of the time-averaged cumulants of the electronic current. The theory is illustrated using different transport scenarios in model systems.'\nauthor:\n- 'Thomas D. Honeychurch'\n- 'Daniel S. Kosov'\nbibliography:\n- 'lib\\_final.bib'\ntitle: Full counting statistics for electron transport in periodically driven quantum dots\n---\n\nIntroduction\n============\n\nTime-dependent phenomena play an important part in the investigation and application of nanoscale electronics. The dynamical response of a junction to the modulation of a voltage or to the irradiation by a light source offers intriguing means of probing and controlling the" +"---\nabstract: 'The success of Deep Learning has created a surge in interest in a wide range of Natural Language Generation (NLG) tasks. 
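As background for the heuristic metrics (BLEU, ROUGE) whose shortcomings the NLG survey beginning above discusses, the following minimal sketch shows the clipped n-gram precision at the core of BLEU — without brevity penalty or smoothing, so it is illustrative rather than BLEU itself. The second example shows exactly the failure mode the survey points at: zero overlap despite identical meaning.

```python
from collections import Counter

def clipped_precision(candidate, reference, n=2):
    """Clipped n-gram precision of a candidate against one reference."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return overlap / max(sum(cand_ngrams.values()), 1)

ref = "the cat sat on the mat"
print(clipped_precision("the cat sat on the mat", ref))    # 1.0
print(clipped_precision("a feline rested on a rug", ref))  # 0.0 despite same meaning
```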
Deep Learning has not only pushed the state of the art in several existing NLG tasks but has also enabled researchers to explore various newer NLG tasks such as image captioning. Such rapid progress in NLG has necessitated the development of accurate automatic evaluation metrics that would allow us to track the progress in the field of NLG. However, unlike classification tasks, automatically evaluating NLG systems in itself is a huge challenge. Several works have shown that early heuristic-based metrics such as BLEU and ROUGE are inadequate for capturing the nuances in the different NLG tasks. The expanding number of NLG models and the shortcomings of the current metrics have led to a rapid surge in the number of evaluation metrics proposed since 2014. Moreover, various evaluation metrics have shifted from using pre-determined heuristic-based formulae to trained transformer models. This rapid change in a relatively short time has led to the need for a survey of the existing NLG metrics to help existing and new researchers to quickly come up to speed with the developments that have happened" +"---\nabstract: 'We consider the inverse spectral theory of vibrating string equations. In this regard, first eigenvalue Ambarzumyan-type uniqueness theorems are stated and proved subject to separated, self-adjoint boundary conditions. More precisely, it is shown that there is a curve in the boundary parameters\u2019 domain on which no analog of it is possible. Necessary conditions of the $n$-th eigenvalue are identified, which allows us to state the theorems. In addition, several properties of the first eigenvalue are examined. Lower and upper bounds are identified, and the areas are described in the boundary parameters\u2019 domain on which the sign of the first eigenvalue remains unchanged. This paper contributes to inverse spectral theory as well as to direct spectral theory.'\nauthor:\n- 'Yuri Ashrafyan[^1] \u00a0and \u00a0Dominik L.\u00a0Michels[^2]'\nbibliography:\n- 'References.bib'\ntitle: 'On Ambarzumyan-type Inverse Problems of Vibrating String Equations'\n---\n\n*Keywords:* Ambarzumyan theorem, first eigenvalue, inverse problems, vibrating string equations.\n\n*MSC 2010:* 34A55, 34L15\n\nIntroduction {#sec1}\n============\n\nWhen dealing with direct problems, one considers a physical model and calculates a specific output given a specific input. In contrast, inverse problems deal with the inversion of this model based on measured or observed outputs, i.e.\u00a0we consider a mathematical framework which is" +"---\nauthor:\n- Olivier Del Fabbro\n- 'Patrik Christen[^1]'\ndate: '31 August 2020 (last revised 25 September 2021)'\ntitle: 'Philosophy-Guided Modelling and Implementation of Adaptation and Control in Complex Systems'\n---\n\nAdaptation; control; complex systems modelling and simulation; cybernetics; meta-modelling; meta-programming; philosophy of individuation; philosophy of organism.\n\nIntroduction\n============\n\nCybernetics was from its very beginning a scientific endeavor that tried to bring together as many disciplines as possible. Already in the paper *Behavior, Purpose and Teleology*, written in 1943, Norbert Wiener, Arturo Rosenblueth, and Julian Bigelow [@Rosenblueth.1943] mixed concepts from physiology, neuropsychology, engineering, and even philosophy (teleology) in order to highlight analogies between living beings and machines. Active and purposeful behaviour leads to an output of signals which can be fed back to the system so that the teleology, i.e. 
the end, aim, goal, finality, of the system under consideration is altered. In the end, it does not matter if that system is the human nervous system or an anti-aircraft defence system.\n\nYet, what seems to be missing in that publication and also in Wiener\u2019s monograph *Cybernetics*, published in 1948 [@Wiener.1948], is the interpretation of feedback and control in biological terms. It was left to others in the cybernetic community such as" +"---\nabstract: 'Let $X$ be a compact K\u00e4hler space with klt singularities and vanishing first Chern class. We prove the Bochner principle for holomorphic tensors on the smooth locus of $X$: any such tensor is parallel with respect to the singular Ricci-flat metrics. As a consequence, after a finite quasi-\u00e9tale cover $X$ splits off a complex torus of the maximum possible dimension. We then proceed to decompose the tangent sheaf of $X$ according to its holonomy representation. In particular, we classify those $X$ which have strongly stable tangent sheaf: up to quasi-\u00e9tale covers, these are either irreducible Calabi\u2013Yau or irreducible holomorphic symplectic. As an application of these results, we show that if $X$ has dimension four, then it satisfies Campana\u2019s Abelianity Conjecture.'\naddress:\n- 'Univ Rennes, CNRS, IRMAR \u2014 UMR 6625, F\u201335000 Rennes, France et Institut Universitaire de France'\n- 'Lehrstuhl f\u00fcr Mathematik I, Universit\u00e4t Bayreuth, 95440 Bayreuth, Germany'\n- 'Institut de Math\u00e9matiques de Toulouse, Universit\u00e9 Paul Sabatier, 31062 Toulouse Cedex 9, France'\n- 'Lehrstuhl f\u00fcr Mathematik VIII, Universit\u00e4t Bayreuth, 95440 Bayreuth, Germany'\nauthor:\n- Beno\u00eet Claudon\n- Patrick Graf\n- Henri Guenancia\n- Philipp Naumann\nbibliography:\n- 'biblio.bib'\n- '20\\_Literatur.bib'\ntitle: 'K\u00e4hler spaces with zero first Chern class: Bochner principle," +"---\nabstract: 'Record Dynamics (RD) deals with complex systems evolving through a sequence of metastable stages. These are macroscopically distinguishable and appear stationary, except for the sudden and rapid changes, called quakes, which induce the transitions from one stage to the next. This phenomenology is well known in physics as \u201cphysical aging\u201d, but from the vantage point of RD the evolution of a class of systems of physical, biological and cultural origin is rooted in a hierarchically structured configuration space and can therefore be analyzed by similar statistical tools. This colloquium paper strives to present in a coherent fashion methods and ideas that have gradually evolved over time. To this end, it first describes the differences and similarities between RD and two widespread paradigms of complex dynamics, Self Organized Criticality and Continuous Time Random Walks. It then outlines the Poissonian nature of record events in white noise time series, and connects it to the statistics of quakes in metastable hierarchical systems, arguing that the relaxation effects of quakes can generally be described by power laws unrelated to criticality. Several different applications of RD have been developed over the years. Some of these are described, showing the basic RD hypothesis, the" +"---\nabstract: 'We analyze extremes of traffic flow profiles composed of traffic counts over a day. The data are essentially curves, and determining which trajectory should be classified as extreme is not straightforward. 
To assess the extremes of the traffic flow curves in a coherent way, we use a directional definition of extremeness and apply the dimension reduction technique called principal component analysis (PCA) in an asymmetric norm. In the classical PCA one reduces the dimensions of the data by projecting it in the direction of the largest variation of the projection around its mean. In the PCA in an asymmetric norm one chooses the projection directions, such that the asymmetrically weighted variation around a tail index \u2013 an expectile \u2013 of the data is the largest possible. Expectiles are tail measures that generalize the mean in a similar manner as quantiles generalize the median. Focusing on the asymmetrically weighted variation around an expectile of the data, we find the appropriate projection directions and the low dimensional representation of the traffic flow profiles that uncover different patterns in their extremes. Using the traffic flow data from the roundabout on Ernst-Reuter-Platz in the city center of Berlin, Germany, we estimate," +"---\nabstract: 'The continuum-scale electrokinetic porous-media flow and excess charge redistribution equations are uncoupled using eigenvalue decomposition. The uncoupling results in a pair of independent diffusion equations for \u201cintermediate\u201d potentials subject to modified material properties and boundary conditions. The fluid pressure and electrostatic potential are then found by recombining the solutions to the two intermediate uncoupled problems in a matrix-vector multiply. Expressions for the material properties or source terms in the intermediate uncoupled problem may require extended precision or careful re-writing to avoid numerical cancellation, but the solutions themselves can be computed in typical double precision. The approach works with analytical or gridded numerical solutions and is illustrated through two examples. The solution for flow to a pumping well is manipulated to predict streaming potential and electroosmosis, and a periodic one-dimensional analytical solution is derived and used to predict electroosmosis and streaming potential in a laboratory flow cell subjected to low frequency alternating current and pressure excitation. The examples illustrate the utility of the eigenvalue decoupling approach, repurposing existing analytical solutions and leveraging simpler-to-derive solutions or numerical models for coupled physics.'\nauthor:\n- 'Kristopher L. Kuhlman'\n- Bwalya Malama\ntitle: Uncoupling electrokinetic flow solutions\n---\n\nIntroduction\n============\n\nCoupled physical phenomena" +"---\nauthor:\n- |\n \\\n National Institute for Space Research, Sao Jose dos Campos, Brazil\\\n E-mail:\n- |\n Liliana E. Rivera Sandoval\\\n Texas Tech University, Lubbock, USA\\\n E-mail:\nbibliography:\n- 'references.bib'\ntitle: Properties of Cataclysmic Variables in Globular Clusters\n---\n\nfirstaubox\n\nWhy should one investigate cataclysmic variables in globular clusters? {#SecINTRO}\n======================================================================\n\nThe study of star clusters plays an important role in our understanding of the Universe since these systems are natural laboratories for testing theories of stellar dynamics and evolution. 
Particularly, globular clusters (GCs) are among the most important objects for studying the formation and the physical nature of exotic systems given that they are nearly as old as the Universe itself and they can reach very high stellar densities (up to $\sim10^6$ stars per pc$^3$ in their cores, Harris 1996\u00a0[@Harris_1996]). Because stellar dynamics plays an important role in these environments, GCs are factories of exotic systems such as low mass X-ray binaries, cataclysmic variables, active binaries, red and blue straggler stars, millisecond pulsars, black hole binaries, among others. Thus, the study of these binaries provides key information and tools that can help us to understand the formation and evolution processes of star clusters themselves, which, in" +"---\nabstract: 'Existing works address the problem of generating high frame-rate sharp videos by separately learning the frame deblurring and frame interpolation modules. Most of these approaches have a strong prior assumption that all the input frames are blurry, whereas in a real-world setting, the quality of frames varies. Moreover, such approaches are trained to perform either of the two tasks - deblurring or interpolation - in isolation, while many practical situations call for both. Different from these works, we address a more realistic problem of high frame-rate sharp video synthesis with no prior assumption that input is always blurry. We introduce a novel architecture, Adaptive Latent Attention Network (ALANET), which synthesizes sharp high frame-rate videos with no prior knowledge of input frames being blurry or not, thereby performing the task of both deblurring and interpolation. We hypothesize that information from the latent representation of the consecutive frames can be utilized to generate optimized representations for both frame deblurring and frame interpolation. Specifically, we employ a combination of self-attention and cross-attention modules between consecutive frames in the latent space to generate an optimized representation for each frame. The optimized representation learnt using these attention modules helps the model to generate and interpolate" +"---\nauthor:\n- \nbibliography:\n- 'bibliography.bib'\ntitle: ' **Data Sanitisation Protocols for the Privacy Funnel with Differential Privacy Guarantees**'\n---\n\n***Keywords\u2014Privacy funnel; local differential privacy; information privacy; database sanitisation; complexity.***\n\nIntroduction\n============\n\nThis paper is an extended version of [@lopuhaa2020privacy]. Under the Open Data paradigm, governments and other public organisations want to share their collected data with the general public. This increases a government\u2019s transparency, and it also gives citizens and businesses the means to participate in decision-making, as well as using the data for their own purposes. However, while the released data should be as faithful to the raw data as possible, individual citizens\u2019 private data should not be compromised by such data publication.\n\nLet $\\mathcal{X}$ be a finite set. Consider a database $\\vec{X} = (X_1,\\ldots,X_n) \\in \\mathcal{X}^n$ owned by a data aggregator, containing a data item $X_i \\in \\mathcal{X}$ for each user $i$ (For typical database settings, each user\u2019s data is a vector of attributes $X_i = (X_i^1,\\ldots,X_i^m)$; we will consider this in more detail in Section \\[sec:mult\\]). This data may not be considered sensitive by itself, but it might be correlated to a secret $S_i$. 
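The privacy funnel introduced above trades released-data utility against leakage about the secret, both measured by mutual information. A minimal sketch of the basic quantity involved — mutual information computed from a joint probability table — is given below; the joint pmfs are toy examples.

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in nats from a joint pmf given as a 2-D array."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])).sum())

print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # ln 2: full leakage
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0: independence
```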
For instance, $X_i$ might contain the age, sex, weight, skin colour, and" +"---\nabstract: 'We present a direct comparison of the Pan-Andromeda Archaeological Survey (PAndAS) observations of the stellar halo of M31 with the stellar halos of 6 galaxies from the Auriga simulations. We process the simulated halos through the [Auriga2PAndAS]{} pipeline and create PAndAS-like mocks that fold in all observational limitations of the survey data (foreground contamination from the Milky Way stars, incompleteness of the stellar catalogues, photometric uncertainties, etc). This allows us to study the survey data and the mocks in the same way and generate directly comparable density maps and radial density profiles. We show that the simulations are overall compatible with the observations. Nevertheless, some systematic differences exist, such as a preponderance for metal-rich stars in the mocks. While these differences could suggest that M31 had a different accretion history or has a different mass compared to the simulated systems, it is more likely a consequence of an under-quenching of the star formation history of galaxies, related to the resolution of the [Auriga]{} simulations. The direct comparison enabled by our approach offers avenues to improve our understanding of galaxy formation as they can help pinpoint the observable differences between observations and simulations. Ideally, this approach will be further" +"---\nabstract: 'For many-body methods such as MCSCF and CASSCF, in which the number of one-electron orbitals are optimized and independent of basis set used, there are no problems with using plane-wave basis sets. However, for methods currently used in quantum computing such as select configuration interaction (CI) and coupled cluster (CC) methods, it is necessary to have a virtual space that is able to capture a significant amount of electron-electron correlation in the system. The virtual orbitals in a pseudopotential plane-wave Hartree\u2013Fock calculation, because of Coulomb repulsion, are often scattering states that interact very weakly with the filled orbitals. As a result, very little correlation energy is captured from them. The use of virtual spaces derived from the one-electron operators have also been tried, and while some correlation is captured, the amount is quite low. To overcome these limitations, we have been developing new classes of algorithms to define virtual spaces by optimizing orbitals from small pairwise CI Hamiltonians, which we term as correlation optimized virtual orbitals with the abbreviation COVOs. With these procedures we have been able to derive virtual spaces, containing only a few orbitals, that are able to capture a significant amount of correlation. Besides, using" +"---\nabstract: |\n Existing results for low-rank matrix recovery largely focus on quadratic loss, which enjoys favorable properties such as restricted strong convexity/smoothness (RSC/RSM) and well conditioning over all low rank matrices. However, many interesting problems involve more general, non-quadratic losses, which do not satisfy such properties. 
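The low-rank recovery abstract beginning above contrasts non-quadratic losses with the standard quadratic setting. For orientation, here is a generic sketch of rank-constrained projected gradient descent (iterative hard thresholding) with a quadratic loss on full observations; the paper's point is the additional regularity projection oracle needed when the loss is non-quadratic, which this sketch does not include.

```python
import numpy as np

def project_rank(M, r):
    """Rank projection via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(4)
n, r = 30, 2
M_true = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # rank-2 target

M, step = np.zeros((n, n)), 0.5
for _ in range(60):
    grad = M - M_true          # gradient of the quadratic loss 0.5||M - M*||_F^2
    M = project_rank(M - step * grad, r)

print(np.linalg.norm(M - M_true) / np.linalg.norm(M_true))   # ~ 0
```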
For these problems, standard nonconvex approaches such as rank-constrained projected gradient descent (a.k.a.\u00a0iterative hard thresholding) and Burer-Monteiro factorization could have poor empirical performance, and there is no satisfactory theory guaranteeing global and fast convergence for these algorithms.\n\n In this paper, we show that a critical component in provable low-rank recovery with non-quadratic loss is a regularity projection oracle. This oracle restricts iterates to low-rank matrices within an appropriate bounded set, over which the loss function is well behaved and satisfies a set of approximate RSC/RSM conditions. Accordingly, we analyze an (averaged) projected gradient method equipped with such an oracle, and prove that it converges globally and linearly. Our results apply to a wide range of non-quadratic low-rank estimation problems including one bit matrix sensing/completion, individualized rank aggregation, and more broadly generalized linear models with rank constraints.\nauthor:\n- 'Lijun Ding[^1], Yuqian Zhang[^2], and Yudong Chen[^3]'\nbibliography:\n- 'references.bib'\ntitle: 'Low-rank matrix recovery" +"---\nabstract: 'Deep learning has drawn a lot of interest in recent years due to its effectiveness in processing big and complex observational data gathered from diverse instruments. Here we propose a new deep learning method, called SolarUnet, to identify and track solar magnetic flux elements or features in observed vector magnetograms based on the Southwest Automatic Magnetic Identification Suite (SWAMIS). Our method consists of a data pre-processing component that prepares training data from the SWAMIS tool, a deep learning model implemented as a U-shaped convolutional neural network for fast and accurate image segmentation, and a post-processing component that prepares tracking results. SolarUnet is applied to data from the 1.6 meter Goode Solar Telescope at the Big Bear Solar Observatory. When compared to the widely used SWAMIS tool, SolarUnet is faster while agreeing mostly with SWAMIS on feature size and flux distributions, and complementing SWAMIS in tracking long-lifetime features. Thus, the proposed physics-guided deep learning-based tool can be considered as an alternative method for solar magnetic tracking.'\nauthor:\n- Haodi Jiang\n- Jiasheng Wang\n- Chang Liu\n- Ju Jing\n- Hao Liu\n- 'Jason T. L. Wang'\n- Haimin Wang\nbibliography:\n- 'reference.bib'\ntitle: '[**Identifying and Tracking Solar Magnetic" +"---\nabstract: 'We study the statistical properties of Active Ornstein Uhlenbeck particles (AOUPs). In this simplest of models, the Gaussian white noise of overdamped Brownian colloids is replaced by a Gaussian colored noise. This suffices to grant this system the hallmark properties of active matter, while still allowing for analytical progress. We study in detail the steady-state distribution of AOUPs in the small persistence time limit and for spatially varying activity. At the collective level, we show AOUPs to experience motility-induced phase separation both in the presence of pairwise forces or due to quorum-sensing interactions. We characterize both the instability mechanism leading to phase separation and the resulting phase coexistence. We probe how, in the stationary state, AOUPs depart from their thermal equilibrium limit by investigating the emergence of ratchet currents and entropy production. In the small persistence time limit, we show how fluctuation-dissipation relations are recovered. 
Finally, we discuss how the emerging properties of AOUPs can be characterized from the dynamics of their collective modes.'\nauthor:\n- David Martin\n- 'J\u00e9r\u00e9my O\u2019Byrne'\n- 'Michael E. Cates'\n- \u00c9tienne Fodor\n- Cesare Nardini\n- Julien Tailleur\n- Fr\u00e9d\u00e9ric van Wijland\nbibliography:\n- 'Biblio.bib'\ntitle: Statistical Mechanics of Active Ornstein Uhlenbeck" +"---\nauthor:\n- 'Carlos Bautista,$^{1,2}$'\n- 'Leonardo de Lima,$^3$'\n- 'Ricardo D\u2019Elia Matheus,$^1$'\n- 'Eduardo Pont\u00f3n,$^{1,2,}$[^1]'\n- 'Le\u00f4nidas A. Fernandes do Prado,$^{1,4}$ and'\n- 'Aurore Savoy-Navarro$^4$'\nbibliography:\n- 'ref.bib'\ntitle: 'Probing the Top-Higgs Sector with Composite Higgs Models at Present and Future Hadron Colliders'\n---\n\nWe study the production of $t{\\overline t}h$ and $t{\\overline t}hh$ at hadron colliders, in the minimal Composite Higgs Models, based on the coset $SO(5)/SO(4)$. We explore the fermionic representations $\\bf 5$ and ${\\bf 14}$. A detailed phenomenological analysis is performed, covering the energy range of the LHC and its High Luminosity upgrade, as well as that of a future 100 TeV hadron collider. Both resonant and non-resonant production are considered, stressing the interplay and complementary interest of these channels with each other and double Higgs production. We provide sets of representative points with detailed experimental outcomes in terms of modification of the cross sections as well as resonance masses and branching ratios. For non-resonant production, we gauge the relative importance of Yukawa, Higgs trilinear, and contact $t\\bar{t}hh$ vertices to these" +"---\nabstract: 'A family of travelling wave solutions to the Fisher-KPP equation with speeds $c=\\pm 5/\\sqrt{6}$ can be expressed exactly using Weierstra\u00df elliptic functions. The well-known solution for $c=5/\\sqrt{6}$, which decays to zero in the far-field, is exceptional in the sense that it can be written simply in terms of an exponential function. This solution has the property that the phase-plane trajectory is a heteroclinic orbit beginning at a saddle point and ending at the origin. For $c=-5/\\sqrt{6}$, there is also a trajectory that begins at the saddle point, but this solution is normally disregarded as being unphysical as it blows up for finite $z$. We reinterpret this special trajectory as an exact sharp-fronted travelling solution to a *Fisher-Stefan* type moving boundary problem, where the population is receding from, instead of advancing into, an empty space. By simulating the full moving boundary problem numerically, we demonstrate how time-dependent solutions evolve to this exact travelling solution for large time. The relevance of such receding travelling waves to mathematical models for cell migration and cell proliferation is also discussed.'\nauthor:\n- 'Scott\u00a0W. McCue'\n- 'Maud El-Hachem'\n- 'Matthew\u00a0J. Simpson[^1]'\ntitle: 'Exact sharp-fronted travelling wave solutions of the Fisher-KPP equation'\n---" +"---\nabstract: 'We consider the statistical properties of a non-falling trajectory in the Whitney problem of an inverted pendulum excited by an external force. 
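A minimal Euler–Maruyama sketch of the driven inverted pendulum the abstract above studies, in assumed dimensionless units ($g/l = 1$, $\theta = 0$ upright, white-noise horizontal drive): a generic noise realisation topples the pendulum, which is why the non-falling trajectory is the measure-zero exception.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, steps, sigma = 1e-3, 20000, 1.0       # illustrative parameters
theta, omega = 0.01, 0.0                  # small tilt from the upright position

for _ in range(steps):
    kick = sigma * np.sqrt(dt) * rng.normal()        # white-noise drive
    omega += np.sin(theta) * dt - np.cos(theta) * kick
    theta += omega * dt

print(abs(theta) > np.pi / 2)             # typically True: this realisation falls
```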
In the case when the external force is white noise, we recently found the instantaneous distribution function of the pendulum angle and velocity over an infinite time interval using a transfer-matrix analysis of the supersymmetric field theory. Here, we generalize our approach to the case of finite time intervals and multipoint correlation functions. Using the developed formalism, we calculate the Lyapunov exponent, which determines the decay rate of correlations on a non-falling trajectory.'\nauthor:\n- 'Nikolai A.\u00a0Stepanov'\n- 'Mikhail A.\u00a0Skvortsov'\ndate: 'August 27, 2020'\ntitle: 'Lyapunov exponent for Whitney\u2019s problem with random drive'\n---\n\n**1.** Balancing an inverted pendulum under a given time-dependent horizontal force $f(t)$ is a famous mathematical problem formulated by Courant and Robbins in their book *What is Mathematics?* (first edition in 1941) [@CR-book], where Whitney was credited as the author of the problem. Using fairly general mathematical arguments based on the intermediate value theorem, they showed that for any force $f(t)$ acting during a finite time interval $[0,T]$, an initial position of the pendulum in the upper half-plane can be" +"---\nabstract: 'An off-policy reinforcement learning based control strategy is developed for the optimal tracking control problem to achieve the prescribed performance of full states during the learning process. The optimal tracking control problem is converted into an optimal regulation problem based on an auxiliary system. The requirements of prescribed performances are transformed into constraint satisfaction problems that are dealt with by risk-sensitive state penalty terms under an optimization framework. To obtain approximate solutions of the Hamilton-Jacobi-Bellman equation, an off-policy adaptive critic learning architecture is developed by using current data and experience data together. By using experience data, the proposed weight estimation update law of the critic learning agent guarantees weight convergence to the actual value. This technique is more practical than common methods that need to incorporate external signals to satisfy the persistence of excitation condition for weight convergence. The proofs of stability and weight convergence of the closed-loop system are provided. Simulation results reveal the validity of the proposed off-policy risk-sensitive reinforcement learning based control strategy.'\nauthor:\n- 'Cong Li,\u00a0 Yongchao Wang, \u00a0 Fangzhou Liu[$^*$]{}, \u00a0 and\u00a0Martin Buss[^1]'\nbibliography:\n- 'bibtex/bib/IEEEexample.bib'\ntitle: Off Policy Risk Sensitive Reinforcement Learning Based Optimal Tracking Control" +"---\nabstract: |\n \\[sec:abstract\\] The coronavirus outbreak became a major concern for society worldwide. Technological innovation and ingenuity are essential to fight the COVID-19 pandemic and bring us one step closer to overcoming it.\n\n Researchers all over the world are working actively to find alternatives in different fields, such as the healthcare system, pharmaceutics, and health prevention, among others. With the rise of artificial intelligence (AI) in the last 10 years, AI-based applications have become the prevalent solution in different areas because of their higher capability, and they are now being adopted to help combat COVID-19. This work provides a fast detection system for COVID-19 using X-Ray images based on deep learning (DL) techniques. 
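A minimal Keras sketch of the two-stage filter-then-classify design this record goes on to describe (an X-ray/non-X-ray filter followed by a COVID-19 classifier). The backbones are instantiated untrained, and the input size and threshold are placeholder assumptions rather than the deployed system.

```python
import numpy as np
import tensorflow as tf

def binary_head(backbone_cls, name):
    # weights=None keeps the sketch self-contained; the real service
    # would load trained weights for each stage.
    backbone = backbone_cls(include_top=False, weights=None,
                            input_shape=(224, 224, 3), pooling="avg")
    out = tf.keras.layers.Dense(1, activation="sigmoid")(backbone.output)
    return tf.keras.Model(backbone.input, out, name=name)

is_xray = binary_head(tf.keras.applications.MobileNetV2, "xray_filter")
is_covid = binary_head(tf.keras.applications.DenseNet121, "covid_classifier")

def classify(image, xray_threshold=0.5):
    """Stage 1 rejects non-X-ray inputs; stage 2 runs only on X-rays."""
    batch = image[np.newaxis].astype("float32")
    if float(is_xray(batch)) < xray_threshold:
        return "rejected: not a chest X-ray"
    return f"COVID-19 score: {float(is_covid(batch)):.3f}"

print(classify(np.zeros((224, 224, 3))))
```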
This system is available as a free, web-deployed service for fast patient classification, alleviating the high demand on standard methods for COVID-19 diagnosis. It consists of two deep learning models: one to differentiate between X-Ray and non-X-Ray images, based on the Mobile-Net architecture, and another to identify chest X-Ray images with characteristics of COVID-19, based on the DenseNet architecture. For real-time inference, a pair of dedicated GPUs is provided, which reduces the computational time. The whole system can filter out non-chest X-Ray images, and detect whether the X-Ray presents characteristics" +"---\nabstract: 'Recently published [@Whaley_Baldwin] Density Functional Theory results using the PBE functional suggest that elemental sulfur does not adopt the simple-cubic (SC) $Pm\\bar{3}m$ phase at high pressures, in disagreement with previous works [@Rudin; @USPEX]. We carry out an extensive set of calculations using a variety of different functionals, pseudopotentials and the all-electron code `ELK`, and we are now able to show that even though under LDA and PW91 a high-pressure simple-cubic phase does indeed become favourable at the static lattice level, when zero-point energies (ZPEs) are included, the transition to the simple-cubic phase is suppressed in every case, owing to the larger ZPE of the SC phase. We reproduce these findings with pseudopotentials that explicitly include deep core and semicore states, and show that even at these high pressures, only the $n=3$ valence shell contributes to bonding in sulfur. We further show that the $Pm\\bar{3}m$ phase becomes even more unfavourable at finite temperatures. We finally investigate whether anharmonic corrections to the zero-point energies could make the $Pm\\bar{3}m$ phase favourable, and find that these corrections are several orders of magnitude smaller than the ZPEs and are thus negligible. These results therefore confirm the original findings of [@Whaley_Baldwin]: that the high" +"---\nabstract: 'To run an algorithm on a quantum computer, one must choose an assignment from logical qubits in a circuit to physical qubits on quantum hardware. This task of initial qubit placement, or *qubit allocation*, is especially important on present-day quantum computers which have a limited number of qubits, connectivity constraints, and varying gate fidelities. In this work we formulate and implement the qubit placement problem as a quadratic, unconstrained binary optimization (QUBO) problem and solve it using simulated annealing to obtain a spectrum of initial placements. Compared to contemporary allocation methods available in [t$\\mid$ket$\\rangle$ ]{}and Qiskit, the QUBO method yields allocations with improved circuit depth for $>$50% of a large set of benchmark circuits, with many also requiring fewer CX gates.'\nauthor:\n- Bryan Dury\n- Olivia Di Matteo\nbibliography:\n- 'main.bib'\ntitle: A QUBO formulation for qubit allocation\n---\n\nIntroduction\n============\n\nThe past decade has seen significant development in quantum computing hardware, with a number of commercially-available machines and software libraries that enable users to program and execute their own quantum algorithms. While architectures and implementations vary, common issues with present-day machines are the limited qubit connectivity and high error rates, especially for two-qubit operations.\n\nA crucial" +"---\nabstract: |\n The observed X-ray pulse periods of OB-type high-mass X-ray binary (HMXB) pulsars are typically longer than 100 seconds. 
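Returning to the qubit-allocation record above: a toy sketch of a QUBO for initial placement, with one-hot assignment variables, penalty terms enforcing a valid placement, a reward for mapping interacting logical pairs onto coupled physical pairs, and a small simulated-annealing loop. Sizes, weights and the cooling schedule are invented for illustration and are not the paper's formulation.

```python
import itertools, math, random

logicals, physicals = range(3), range(4)
interacts = {(0, 1), (1, 2)}           # logical pairs sharing CX gates
coupled = {(0, 1), (1, 2), (2, 3)}     # hardware coupling graph (a line)

def energy(x, penalty=10.0):
    """QUBO energy of the binary assignment x[l, p] = 1 iff logical
    qubit l is placed on physical qubit p."""
    e = 0.0
    for l in logicals:                 # each logical placed exactly once
        e += penalty * (sum(x[l, p] for p in physicals) - 1) ** 2
    for p in physicals:                # no two logicals share a physical
        for l1, l2 in itertools.combinations(logicals, 2):
            e += penalty * x[l1, p] * x[l2, p]
    for l1, l2 in interacts:           # reward coupled placements
        for p1, p2 in coupled:
            e -= x[l1, p1] * x[l2, p2] + x[l1, p2] * x[l2, p1]
    return e

random.seed(0)
x = {(l, p): 0 for l in logicals for p in physicals}
for step in range(5000):               # simulated annealing on bit flips
    temp = max(0.05, 3.0 * (1 - step / 5000))
    l, p = random.choice(list(x))
    before = energy(x)
    x[l, p] ^= 1
    delta = energy(x) - before
    if delta > 0 and random.random() >= math.exp(-delta / temp):
        x[l, p] ^= 1                   # reject the uphill move
print("placement:", sorted(k for k, v in x.items() if v == 1))
```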
It is thought that the interaction between the strong magnetic field of the neutron star and the wind matter could cause such long pulse periods.\n\n In this study, we follow the spin evolution of the NS, taking into account the interaction between the magnetic field and wind matter. As new elements, we simultaneously solve for the evolution of the magnetic field of the neutron star, and we additionally focus on the effects of the wind properties of the donor. As a result, we obtain evolutionary tracks in which the neutron star spends some duration in the ejector phase after birth, then rapidly spins down, reaches quasi-equilibrium, and gradually spins up. Such evolution is similar to that found in previous studies, but we found that its dominant physics depends on the velocity of the donor wind. When the wind velocity is fast, the spin-down occurs due to magnetic inhibition, while the classical propeller effect and a settling accretion shell cause rapid spin-down for slow wind accretion. Since the wind velocity of the donor could depend on the irradiated X-ray luminosity," +"---\nabstract: 'LiDAR-driven 3D sensing allows new generations of vehicles to achieve advanced levels of situation awareness. However, recent works have demonstrated that physical adversaries can spoof LiDAR return signals and deceive 3D object detectors to erroneously detect \u201cghost" objects. Existing defenses are either impractical or focus only on vehicles. Unfortunately, smaller objects such as pedestrians and cyclists are easier to spoof, harder to defend against, and attacks on them can have worse safety implications. To address this gap, we introduce [Shadow-Catcher]{}, a set of new techniques embodied in an end-to-end prototype to detect both large and small ghost object attacks on 3D detectors. We characterize a new semantically meaningful physical invariant (3D shadows) which [Shadow-Catcher]{} leverages for validating objects. Our evaluation on the KITTI dataset shows that [Shadow-Catcher]{} consistently achieves more than 94% accuracy in identifying anomalous shadows for vehicles, pedestrians, and cyclists, while it remains robust to a novel class of strong \u201cinvalidation\u201d attacks targeting the defense system. [Shadow-Catcher]{} can achieve real-time detection, requiring only between 0.003s\u20130.021s on average to process an object in a 3D point cloud on commodity hardware and achieves a 2.17x speedup compared to prior work.'\nauthor:\n- Zhongyuan Hau\n- Soteris Demetriou\n- 'Luis" +"---\nabstract: 'Social hierarchy is an important factor that cannot be ignored in human socioeconomic activities and in the animal world. Here we incorporate this factor into the evolutionary game to see what impact it could have on the cooperation outcome. The probabilistic strategy adoption between two players is then not only determined by their payoffs, but also by their hierarchy difference \u2014 high-rank players are more likely to reproduce their strategies than low-rank peers. Through simulating the evolution of the Prisoners\u2019 dilemma game with three hierarchical distributions, we find that the levels of cooperation are enhanced in all cases, and the enhancement is optimal in the uniform case. The enhancement is due to the fact that the presence of hierarchy facilitates the formation of cooperation clusters with high-rank players acting as the nucleation cores. 
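The hierarchy-biased update just described can be made concrete with a small sketch. A Fermi function of the payoff difference is a standard choice in such studies; the specific multiplicative hierarchy bias below is an assumption for illustration, not the paper's exact rule.

```python
import numpy as np

def adopt_probability(payoff_i, payoff_j, rank_i, rank_j, K=0.1, w=1.0):
    """Probability that focal player i adopts the strategy of neighbor j.
    The payoff difference enters through a Fermi function with selection
    noise K; the hierarchy difference (rank_j - rank_i) biases imitation
    toward higher-ranked neighbors, with w setting the bias strength."""
    fermi = 1.0 / (1.0 + np.exp(-(payoff_j - payoff_i) / K))
    bias = 1.0 / (1.0 + np.exp(-w * (rank_j - rank_i)))
    return fermi * bias

# A high-rank neighbor is imitated far more readily than a low-rank one:
print(adopt_probability(1.0, 1.2, rank_i=0.2, rank_j=0.9))   # ~0.59
print(adopt_probability(1.0, 1.2, rank_i=0.9, rank_j=0.2))   # ~0.29
```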
This mechanism remains valid on Barab\u00e1si-Albert scale-free networks, in particular the cooperation enhancement is maximal when the hubs are of higher social ranks. We also study a two-hierarchy model, where similar cooperation promotion is revealed and some theoretical analyses are provided. Our finding may partially explain why the social hierarchy is so ubiquitous on this planet.'\naddress:\n- 'School of" +"---\nabstract: 'Coordinate-transformation-inspired optical devices have been mostly examined in the continuous-wave regime: the performance of an invisibility cloak, which has been demonstrated for monochromatic excitation, is likely to deteriorate for short pulses. Here we investigate pulse dynamics of flexural waves propagating in transformed plates. We propose a practical realization of a waveshifter and a rotator for flexural waves based on the coordinate transformation method. Time-resolved measurements reveal how the waveshifter deviates a short pulse from its initial trajectory, with no reflection at the bend and no spatial and temporal distortion of the pulse. Extending our strategy to cylindrical coordinates, we design a wave rotator. We demonstrate experimentally how a pulsed plane wave is twisted inside the rotator, while its wavefront is recovered behind the rotator and the pulse shape is preserved, with no extra time delay. We propose the realization of the dynamical mirage effect, where an obstacle appears oriented in a deceptive direction.'\nauthor:\n- Kun Tang\n- Chenni Xu\n- S\u00e9bastien Guenneau\n- 'Patrick Sebbah\\*'\nbibliography:\n- 'mybibligraphy.bib'\ntitle: Pulse dynamics of flexural waves in transformed plates\n---\n\nDr. Kun Tang, Dr. Chenni Xu, Prof. Patrick Sebbah\\\nDepartment of Physics, The Jack and Pearl Resnick Institute for" +"---\nabstract: |\n We investigate opinion dynamics in multi-agent networks when a bias toward one of two possible opinions exists; for example, reflecting a status quo vs a superior alternative.\n\n Starting with all agents sharing an initial opinion representing the status quo, the system evolves in steps. In each step, one agent selected uniformly at random adopts the superior opinion with some probability $\\alpha$, and with probability $1 - \\alpha$ it follows an underlying update rule to revise its opinion on the basis of those held by its neighbors. We analyze convergence of the resulting process under two well-known update rules, namely *majority* and *voter*.\n\n The framework we propose exhibits a rich structure, with a non-obvious interplay between topology and underlying update rule. For example, for the voter rule we show that the speed of convergence bears no significant dependence on the underlying topology, whereas the picture changes completely under the majority rule, where network density negatively affects convergence.\n\n We believe that the model we propose is at the same time simple, rich, and modular, affording mathematical characterization of the interplay between bias, underlying opinion dynamics, and social structure in a unified setting.\nauthor:\n- |\n Aris Anagnostopoulos\\\n [Sapienza Universit\u00e0" +"---\nabstract: |\n An automaton is synchronizing if there is a word whose action maps all states onto the same state. [\u010cern\u00fd\u2019s ]{}conjecture on the length of the shortest such words is one of the most famous open problems in automata theory. 
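The quantity studied next (the shortest word collapsing a set of states) can be computed by brute force for small automata: run BFS over the images of the state set in the power automaton. A sketch, feasible only for small state counts since the subset space is exponential:

```python
from collections import deque

def shortest_merging_word(delta, states):
    """delta: dict mapping (state, letter) -> state. BFS over images of the
    given state set under words; returns a shortest word whose action maps
    `states` onto a single state, or None if no such word exists."""
    start = frozenset(states)
    letters = sorted({a for (_, a) in delta})
    queue, seen = deque([(start, "")]), {start}
    while queue:
        current, word = queue.popleft()
        if len(current) == 1:
            return word
        for a in letters:
            image = frozenset(delta[(q, a)] for q in current)
            if image not in seen:
                seen.add(image)
                queue.append((image, word + a))
    return None

# Cerny automaton on 4 states: 'c' cycles the states, 'm' merges 0 into 1.
delta = {(q, "c"): (q + 1) % 4 for q in range(4)}
delta.update({(q, "m"): q for q in range(4)})
delta[(0, "m")] = 1
word = shortest_merging_word(delta, range(4))
print(len(word), word)   # length (4-1)^2 = 9, matching Cerny's bound
```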
We consider the closely related question of determining the minimum length of a word that maps some $k$ states onto a single state.\n\n For synchronizing automata, we find a simple argument for general $k$ almost halving the upper bound on the minimum length of a word sending $k$ states to a single state. We further improve the upper bound on the minimum length of a word sending $4$ states to a singleton from $0.5n^2$ to $\\approx 0.459n^2$, and the minimum length sending $5$ states to a singleton from $n^2$ to $\\approx 0.798n^2$. In contrast to previous work on triples, our methods are combinatorial. Indeed, we exhibit a fundamental obstacle which suggests that the previously used linear algebraic approach cannot extend to sets of more than 3 states.\n\n In the case of non-synchronizing automata, we give an example to show that the minimum length of a word that sends some $k$ states to a single state can be as" +"---\nabstract: 'Understanding the SARS-CoV-2 dynamics has been the subject of intense research in recent months. In particular, accurate modeling of lockdown effects on epidemic evolution is a key issue in order, e.g., to inform health-care decisions on emergency management. In this regard, the compartmental and spatial models so far proposed use parametric descriptions of the contact rate, often assuming a time-invariant effect of the lockdown. In this paper we show that these assumptions may lead to erroneous assessments of the ongoing pandemic. Thus, we develop a new class of nonparametric compartmental models able to describe how the impact of the lockdown varies in time. Our estimation strategy does not require significant Bayesian prior information and exploits regularization theory. Hospitalization data are mapped into an infinite-dimensional space, yielding a function which also takes into account how social distancing measures and people\u2019s growing awareness of the infection risk evolve as time progresses. This also permits reconstructing a continuous-time profile of the SARS-CoV-2 reproduction number with a resolution never reached before in the literature. When applied to data collected in Lombardy, the most affected Italian region, our model illustrates how people\u2019s behaviour changed during the restrictions and its importance to contain the" +"---\nabstract: 'Recent advances in end-to-end models have outperformed conventional models by employing a two-pass model. The two-pass model provides better speed-quality trade-offs for on-device speech recognition, where a $1st$-pass model generates hypotheses in a streaming fashion, and a $2nd$-pass model re-scores the hypotheses with full audio sequence context. The $2nd$-pass model plays a key role in the quality improvement of the end-to-end model to surpass the conventional model. One main challenge of the two-pass model is the computation latency introduced by the $2nd$-pass model. Specifically, the original design of the two-pass model uses LSTMs for the $2nd$-pass model, which are subject to long latency as they are constrained by the recurrent nature and have to run inference sequentially. In this work we explore replacing the LSTM layers in the $2nd$-pass rescorer with Transformer layers, which can process the entire hypothesis sequences *in parallel* and can therefore utilize the on-device computation resources more efficiently. 
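A toy PyTorch sketch of the parallel second-pass idea described above: a causally masked Transformer scores every position of every first-pass hypothesis in one batched call, whereas an LSTM rescorer must unroll step by step. Vocabulary and model sizes are placeholders, and audio context is ignored entirely.

```python
import torch
import torch.nn as nn

class ToyTransformerRescorer(nn.Module):
    def __init__(self, vocab=64, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(dim, vocab)

    def log_likelihood(self, tokens):
        # Causal mask: position t attends only to positions <= t.
        T = tokens.shape[1]
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.encoder(self.embed(tokens), mask=mask)
        logp = self.out(h).log_softmax(-1)
        # Hypothesis score: sum of next-token log-probs, in parallel over t.
        return logp[:, :-1].gather(2, tokens[:, 1:, None]).squeeze(2).sum(1)

rescorer = ToyTransformerRescorer()
hyps = torch.randint(0, 64, (8, 12))   # 8 first-pass hypotheses, length 12
print("re-ranked best:", int(rescorer.log_likelihood(hyps).argmax()))
```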
Compared with an LSTM-based baseline, our proposed Transformer rescorer achieves more than $50\\%$ latency reduction with quality improvement.'\naddress: 'Google Inc., USA'\nbibliography:\n- 'mybib.bib'\ntitle: 'Parallel Rescoring with Transformer for Streaming On-Device Speech Recognition'\n---\n\n**Index Terms**: Streaming speech recognition, Transformer, Latency, Rescoring\n\nIntroduction\n============" +"---\nabstract: 'We consider the notion of cosmological symmetry, i.e., spatial homogeneity and isotropy, in the field of teleparallel gravity and geometry, and provide a complete classification of all homogeneous and isotropic teleparallel geometries. We explicitly construct these geometries by independently employing three different methods, and prove that all of them lead to the same class of geometries. Further, we derive their properties, such as the torsion tensor and its irreducible decomposition, as well as the transformation behavior under change of the time coordinate, and derive the most general cosmological field equations for a number of teleparallel gravity theories. In addition to homogeneity and isotropy, we extend the notion of cosmological symmetry to also include spatial reflections, and find that this further restricts the possible teleparallel geometries. This work answers an important question in teleparallel cosmology, in which so far only particular examples of cosmologically symmetric solutions had been known, but it was unknown whether further solutions can be constructed.'\naddress: 'Laboratory of Theoretical Physics, Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia\\'\nauthor:\n- Manuel Hohmann\ntitle: Complete classification of cosmological teleparallel geometries\n---\n\nIntroduction {#sec:intro}\n============\n\nSome of the most prominent open questions in" +"---\nabstract: 'We show an extension of Sanov\u2019s theorem on large deviations, controlling the tail probabilities of i.i.d. random variables with matching concentration and anti-concentration bounds. This result has a general scope, applies to samples of any size, and has a short information-theoretic proof using elementary techniques.'\nauthor:\n- 'Akshay Balsubramani akshay7@gmail.com\\'\nbibliography:\n- 'sample.bib'\ntitle: 'Sharp finite-sample concentration of independent variables'\n---\n\nIndependently and identically distributed (i.i.d.) data are drawn from a distribution $P$. A central focus in statistics, machine learning, and probability is what can be gleaned about $P$ from a sample of these data \u2013 how much data must be sampled for the empirical distribution of the data to concentrate near $P$?\n\nSetup\n=====\n\nWe describe distributions $P$ and $Q$ over a measurable space ${\\mathcal{X}}$ using the quantities of information theory. The entropy of a distribution $P$ is ${\\textsc{H}}(P) := {\\mathbb{E}_{x \\sim P} \\left[\\ln \\frac{1}{P(x)}\\right]}$. The relative entropy of $Q$ with respect to $P$ is ${\\textsc{D} }(Q \\mid\\mid P) := {\\mathbb{E}_{x \\sim Q} \\left[\\ln \\frac{Q(x)}{P(x)}\\right]}$. The cross entropy of $P$ with respect to $Q$ is ${\\textsc{H}}(Q , P) := {\\mathbb{E}_{x \\sim Q} \\left[\\ln \\frac{1}{P(x)}\\right]}$. The empirical measure of any sample $Z = (z_1, \\dots, z_n) \\in {\\mathcal{X}}^{n}$" +"---\nabstract: 'Discrete event sequences are ubiquitous, such as an ordered event series of process interactions in Information and Communication Technology systems. 
Recent years have witnessed increasing efforts in detecting anomalies with discrete event sequences. However, it remains an extremely difficult task due to several intrinsic challenges, including the data imbalance issue, the discrete nature of the events, and the sequential nature of the data. To address these challenges, in this paper, we propose **OC4Seq**, a multi-scale one-class recurrent neural network for detecting anomalies in discrete event sequences. Specifically, **OC4Seq** integrates the anomaly detection objective with recurrent neural networks (RNNs) to embed the discrete event sequences into latent spaces, where anomalies can be easily detected. In addition, given that an anomalous sequence could be caused by either individual events, subsequences of events, or the whole sequence, we design a multi-scale RNN framework to capture different levels of sequential patterns simultaneously. Experimental results on three benchmark datasets show that **OC4Seq** consistently outperforms various representative baselines by a large margin. Moreover, through both quantitative and qualitative analysis, the importance of capturing multi-scale sequential patterns for event anomaly detection is verified.'\nauthor:\n- Zhiwei Wang\n- Zhengzhang Chen\n- Jingchao Ni\n- Hui Liu\n-" +"---\nabstract: 'The mechanisms of infant development are far from understood. Learning about one\u2019s own body is likely a foundation for subsequent development. Here we look specifically at the problem of how spontaneous touches to the body in early infancy may give rise to first body models and bootstrap further development such as reaching competence. Unlike visually elicited reaching, reaching to one\u2019s own body requires connections of the tactile and motor space only, bypassing vision. Still, the problems of high dimensionality and redundancy of the motor system persist. In this work, we present an embodied computational model on a simulated humanoid robot with artificial sensitive skin on large areas of its body. The robot should autonomously develop the capacity to reach for every tactile sensor on its body. To do this efficiently, we employ the computational framework of intrinsic motivations and variants of goal babbling\u2014as opposed to motor babbling\u2014that prove to make the exploration process faster and alleviate the ill-posedness of learning inverse kinematics. Based on our results, we discuss the next steps in relation to infant studies: what information will be necessary to further ground this computational model in behavioral data.'\nauthor:\n- \nbibliography:\n- './bibliography/ICDL2020\\_NaoSelfExploration.bib'\ntitle: 'Active exploration for" +"---\nabstract: 'The recent discovery of high-redshift\u00a0($z>6$) supermassive black holes\u00a0(SMBH) favors the formation of massive seed BHs in protogalaxies. One possible scenario is the formation of massive stars $\\simeq 10^{3}\\mbox{-}10^{4}~{M_\\odot}$ via runaway stellar collisions in a dense cluster, leaving behind massive BHs without significant mass loss. We study the pulsational instability of massive stars with the zero-age main-sequence\u00a0(ZAMS) mass $M_{\\rm ZAMS}/{M_\\odot}= 300\\mbox{-}3000$ and metallicity $Z/{Z_\\odot}= 0\\mbox{-}10^{-1}$, and discuss whether or not pulsation-driven mass loss prevents massive BH formation. In the MS phase, the pulsational instability excited by the $\\epsilon$-mechanism grows in $\\sim 10^3\\ {\\rm yrs}$. 
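Returning briefly to the OC4Seq record above: the one-class objective it builds on can be sketched in a few lines, with a GRU embedding event sequences and a Deep SVDD-style loss pulling normal embeddings toward a fixed center, so that distance from the center serves as the anomaly score. This toy version is single-scale and untrained on real data; all sizes are invented.

```python
import torch
import torch.nn as nn

class OneClassSeq(nn.Module):
    """GRU encoder with a fixed latent center c; training on normal event
    sequences minimizes distance to c, and the same distance is later used
    as the anomaly score. OC4Seq applies this idea at multiple scales."""
    def __init__(self, n_events=50, dim=16):
        super().__init__()
        self.embed = nn.Embedding(n_events, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.register_buffer("center", torch.ones(dim))

    def score(self, seqs):
        _, h = self.rnn(self.embed(seqs))      # h: (1, batch, dim)
        return ((h[-1] - self.center) ** 2).sum(1)

model = OneClassSeq()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal = torch.randint(0, 50, (32, 20))        # toy "normal" sequences
for _ in range(5):                             # a few one-class steps
    loss = model.score(normal).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("anomaly scores:", model.score(normal)[:3].detach())
```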
As the stellar mass and metallicity increase, the mass-loss rate increases to $\\lesssim 10^{-3}\\ {M_\\odot}\\ {\\rm yr}^{-1}$. In the red super-giant\u00a0(RSG) phase, the instability is excited by the $\\kappa$-mechanism operating in the hydrogen ionization zone and grows more rapidly in $\\sim 10\\ {\\rm yrs}$. The RSG mass-loss rate is almost independent of metallicity and distributes in the range of $\\sim 10^{-3}\\mbox{-}10^{-2}\\ {M_\\odot}\\ {\\rm yr}^{-1}$. Conducting the stellar structure calculations including feedback due to pulsation-driven winds, we find that the stellar models of $M_{\\rm ZAMS}/{M_\\odot}= 300\\mbox{-}3000$ can leave behind remnant BHs more massive than $\\sim 200\\mbox{-}1200\\ {M_\\odot}$. We conclude that massive merger products" +"---\nabstract: 'There has been growing attention on how to effectively and objectively use covariate information when the primary goal is to estimate the average treatment effect (ATE) in randomized clinical trials (RCTs). In this paper, we propose an effective weighting approach to extract covariate information based on the empirical likelihood (EL) method. The resulting two-sample empirical likelihood weighted (ELW) estimator includes two classes of weights, which are obtained from a constrained empirical likelihood estimation procedure, where the covariate information is effectively incorporated into the form of general estimating equations. Furthermore, this ELW approach separates the estimation of ATE from the analysis of the covariate-outcome relationship, which implies that our approach maintains objectivity. In theory, we show that the proposed ELW estimator is semiparametric efficient. We extend our estimator to tackle the scenarios where the outcomes are missing at random (MAR), and prove the double robustness and multiple robustness properties of our estimator. Furthermore, we derive the semiparametric efficiency bound of all regular and asymptotically linear semiparametric ATE estimators under MAR mechanism and prove that our proposed estimator attains this bound. We conduct simulations to make comparisons with other existing estimators, which confirm the efficiency and multiple robustness property of" +"---\nauthor:\n- |\n Po-Ming Law [^1]\\\n Georgia Institute of Technology\n- |\n Alex Endert [^2]\\\n Georgia Institute of Technology\n- |\n John Stasko [^3]\\\n Georgia Institute of Technology\nbibliography:\n- 'template.bib'\ntitle: 'What are Data Insights to Professional Visualization Users?'\n---\n\nThe visualization community has recognized insight as a core purpose of visualizations\u00a0[@purpose]. While developing technologies that facilitate the process of gaining data insights, many researchers have articulated multiple definitions of insight. North\u00a0[@north] conceptualizes insights as complex, deep, qualitative, unexpected, and relevant revelations. Besides considering insight knowledge or information, Chang et al.\u00a0[@chang] believe that an insight can also be regarded as a moment of enlightenment. Despite the efforts to define data insights, little is known about how visualization users perceive them.\n\nWhy care about visualization users\u2019 perceptions of data insights? Understanding their perceptions could offer implications for designing tools that automatically generate data insights. 
Some researchers have envisioned automated systems (Fig.\u00a0\\[powerBI\\]) that communicate data insights with similar qualities to those users glean through construction, manipulation, and interpretation of visualizations\u00a0[@newPaper]. These systems can accelerate knowledge discovery from data and lower the barrier to analysis for non-expert analysts. To create tools for automating data findings that" +"---\nabstract: 'This paper studies an $N$-coalition non-cooperative game problem, where the players in the same coalition cooperatively minimize the sum of their local cost functions under a directed communication graph, while collectively acting as a virtual player to play a non-cooperative game with other coalitions. Moreover, it is assumed that the players have no access to the explicit functional form but only to the function values of their local costs. To solve the problem, a discrete-time gradient-free Nash equilibrium seeking strategy, based on the gradient tracking method, is proposed. Specifically, a gradient estimator is developed locally based on Gaussian smoothing to estimate the partial gradients, and a gradient tracker is constructed locally to trace the average sum of the partial gradients among the players within the coalition. With a sufficiently small constant step-size, we show that all players\u2019 actions approximately converge to the Nash equilibrium at a geometric rate under a strongly monotone game mapping condition. Numerical simulations are conducted to verify the effectiveness of the proposed algorithm.'\nauthor:\n- 'Yipeng Pang and Guoqiang Hu[^1][^2]'\nbibliography:\n- 'd\\_ne\\_coalition\\_rgf\\_reference.bib'\ntitle: '**Nash Equilibrium Seeking in $N$-Coalition Games via a Gradient-Free Method** '\n---\n\nNash equilibrium seeking, gradient-free methods, non-cooperative games.\n\nIntroduction\n============" +"---\nabstract: 'Intermediate-mass black holes (IMBHs) could form via runaway merging of massive stars in a young massive star cluster (YMC). We combine a suite of numerical simulations of YMC formation with a semi-analytic model for dynamical friction and merging of massive stars and evolution of a central quasi-star, to predict how final quasi-star and relic IMBH masses scale with cluster properties (and compare with observations). The simulations argue that inner YMC density profiles at formation are steep (approaching isothermal), producing some efficient merging even in clusters with relatively low effective densities, unlike models which assume flat central profiles resembling those of globular clusters (GCs) [*after*]{} central relaxation. Our results can be approximated by simple analytic scalings, with $M_{\\rm IMBH} \\propto v_{\\rm cl}^{3/2}$ where $v_{\\rm cl}^{2} = G\\,M_{\\rm cl}/r_{\\rm h}$ is the circular velocity in terms of initial cluster mass $M_{\\rm cl}$ and half-mass radius $r_{\\rm h}$. While this suggests IMBH formation is [*possible*]{} even in typical clusters, we show that predicted IMBH masses for these systems are small, $\\sim 100-1000\\,M_{\\odot}$ or $\\sim 0.0003\\,M_{\\rm cl}$, below even the most conservative observational upper limits in all known cases. The IMBH mass could reach $\\gtrsim 10^{4}\\,M_{\\odot}$ in the centers of nuclear star clusters," +"---\nauthor:\n- Yale Fan\nbibliography:\n- 'seifert.bib'\ntitle: '3D-3D Correspondence from Seifert Fibering Operators'\n---\n\n
Using recently developed Seifert fibering operators for 3D $\\mathcal{N} = 2$ gauge theories, we formulate the necessary ingredients for a state-integral model of the topological quantum field theory dual to a given Seifert manifold under the 3D-3D correspondence, focusing on the case of Seifert homology spheres with positive orbifold Euler characteristic. We further exhibit a set of difference operators that annihilate the wavefunctions of this TQFT on hyperbolic three-manifolds, generalizing similar constructions for lens space partition functions and holomorphic blocks. These properties offer intriguing clues as to the structure of the underlying TQFT.\n\nIntroduction\n============\n\nA broad goal of the supersymmetric localization program is to exploit the locality of quantum field theory to find fundamental building blocks of supersymmetric partition functions and observables. In this regard, a powerful point of view is that line operators in the field theory can be used to modify the background geometry on which it resides. Such an idea traces back at least to the work of Blau and Thompson [@Blau:1993tv; @Blau:2006gh] on Chern-Simons theory, but has recently been shown to generalize to arbitrary three-dimensional" +"---\nabstract: 'We present an integrated approach to analyse multi-lead ECG data using the framework of multiplex recurrence networks (MRNs). We explore how their intralayer and interlayer topological features can capture the subtle variations in the recurrence patterns of the underlying spatio-temporal dynamics. We find that MRNs from ECG data of healthy cases are significantly more coherent, with high mutual information and less divergence between the respective degree distributions. In cases of diseases, significant differences in specific measures of similarity between layers are seen. The coherence is affected most in the cases of diseases associated with localized abnormality such as bundle branch block. We note that it is important to do a comprehensive analysis using all the measures to arrive at disease-specific patterns. Our approach is very general and as such can be applied in any other domain where multivariate or multi-channel data are available from highly complex systems.'\nauthor:\n- Sneha Kachhara\n- 'G. Ambika'\ntitle: 'Multiplex Recurrence Networks from multi-lead ECG data'\n---\n\n> The Electrocardiogram (ECG) is a record of the electrical activity of the heart in the form of a time series. The study of cardiac dynamics through ECG has gathered a lot of attention in" +"---\nabstract: 'We have shown that the thermal emission of amorphous dust composed of amorphous silicate dust (a-Si) and amorphous carbon dust (a-C) provides an excellent fit to both the observed intensity and polarization spectra of molecular clouds. The anomalous microwave emission (AME) originates from the resonance transition of the two-level systems (TLS) attributed to a-C grains with an almost spherical shape. On the other hand, the observed polarized emission in submillimeter wavebands comes from a-Si. By taking into account a-C, the model prediction of the polarization fraction of the AME is reduced dramatically. Our model predictions of the 3$\\sigma$ lower limits of the polarization fraction of the Perseus and W43 molecular clouds at 17 GHz are $8.129\\times10^{-5}$ and $8.012\\times10^{-6}$, respectively. The temperature dependence of the heat capacity of a-C shows peculiar behavior compared with that of a-Si. 
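For orientation only, the kind of intensity spectrum such dust models are fit against can be sketched as a two-component modified blackbody; the temperatures, emissivity indices and amplitudes below are arbitrary stand-ins, and the TLS resonance physics responsible for the AME is deliberately not modeled.

```python
import numpy as np

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8        # SI constants

def planck(nu, T):
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def dust_sed(nu, T1=18.0, beta1=1.8, a1=1.0, T2=22.0, beta2=1.2, a2=0.3,
             nu0=353e9):
    """Two-component modified blackbody, a crude stand-in for an
    a-Si + a-C mixture: I_nu = sum_i a_i (nu/nu0)^beta_i B_nu(T_i)."""
    return (a1 * (nu / nu0) ** beta1 * planck(nu, T1)
            + a2 * (nu / nu0) ** beta2 * planck(nu, T2))

nu = np.logspace(np.log10(10e9), np.log10(3000e9), 5)   # 10 GHz - 3 THz
print(dust_sed(nu))
```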
The properties of a-C are, so far, unique among interstellar dust grains. Therefore, we name our dust model the cosmic amorphous dust model (CAD).'\nauthor:\n- Masashi Nashimoto\n- Makoto Hattori\n- 'Fr[\u00e9]{}d[\u00e9]{}rick Poidevin'\n- 'Ricardo G[\u00e9]{}nova-Santos'\nbibliography:\n- 'article.bib'\ntitle: Cosmic Amorphous Dust Model as the Origin of Anomalous Microwave Emission\n---\n\nIntroduction {#sec:intro}\n============\n\nPlenty of" +"---\nabstract: 'The breathing honeycomb lattice hosts a topologically non-trivial bulk phase due to the crystalline symmetry of the system. Pseudospin-dependent edge states which emerge at the interface between trivial and non-trivial regions can be used for directional propagation of energy. Using the plasmonic metasurface as an example system, we probe these states in the near and far-field using a semi-analytical model. We give the conditions under which directionality is observed and show that it is source position dependent. By probing with circularly-polarised magnetic dipoles out of the plane, we first characterize modes along the interface in terms of the enhancement of source emission due to the metasurface. We then excite from the far-field with non-zero orbital angular momentum beams. The position dependent directionality holds true for all classical wave systems with a breathing honeycomb lattice. Our results show that a metasurface in combination with a chiral two-dimensional material could be used to guide light effectively on the nanoscale.'\nauthor:\n- Matthew Proctor\n- Xiaofei Xiao\n- 'Richard V. Craster'\n- 'Stefan A. Maier'\n- Vincenzo Giannini\n- Paloma Arroyo Huidobro\nbibliography:\n- 'main.bib'\ntitle: 'Near- and Far-Field Excitation of Topological Plasmonic Metasurfaces'\n---\n\nIntroduction\n============\n\nTopological nanophotonics offers a path" +"---\nabstract: 'We introduce a new method to continuously map inhomogeneities of a moir\u00e9 lattice and apply it to large-area topographic images we measure on open-device twisted bilayer graphene (TBG). We show that the variation in the twist angle of a TBG device, which is frequently conjectured to be the reason for differences between devices with a supposed similar twist angle, is about 0.08$^\\circ$ around the average of 2.02$^\\circ$ over areas of several hundred nm, comparable to devices encapsulated between hBN slabs. We distinguish between an effective twist angle and local anisotropy and relate the latter to heterostrain. Our results imply that for our devices, twist angle heterogeneity has an effect on the electronic structure roughly equal to that of local strain. The method introduced here is applicable to results from different imaging techniques, and to different moir\u00e9 materials.'\nauthor:\n- 'Tjerk Benschop$^{1*}$'\n- 'Tobias A. de Jong$^{1*}$'\n- 'Petr Stepanov$^{2*}$'\n- 'Xiaobo Lu$^{2}$'\n- 'Vincent Stalman$^{1}$'\n- 'Sense Jan van der Molen$^{1}$'\n- 'Dmitri K. Efetov$^{2}$'\n- 'Milan P. Allan$^{1}$'\ntitle: Measuring local moir\u00e9 lattice heterogeneity of twisted bilayer graphene\n---\n\nIntroduction\n============\n\nStacking two sheets of identical periodic lattices with a small twist angle $\\theta$ leads to a super-periodic lattice" +"---\nabstract: 'To send encrypted emails, users typically need to create and exchange keys which later should be manually authenticated, for instance, by comparing long strings of characters. These tasks are cumbersome for the average user. 
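The word-comparison idea that the next sentences describe can be sketched generically: both parties derive a short word sequence from the combined key fingerprints and compare it aloud. The word list, hash and chunking below are entirely made up and are not p≡p's actual trustwords algorithm.

```python
import hashlib

# Hypothetical word list; the real application ships its own dictionaries.
WORDS = ["apple", "brick", "cloud", "delta", "ember", "flint", "grape",
         "harbor", "indigo", "jungle", "kettle", "lemon", "marble",
         "nickel", "ocean", "pepper"]

def trustwords(fingerprint_a, fingerprint_b, n_words=5):
    """Map two key fingerprints to a short, human-comparable word sequence.
    Both users compute this locally and read the words to each other;
    a match authenticates the channel."""
    digest = hashlib.sha256((fingerprint_a + fingerprint_b).encode()).digest()
    return " ".join(WORDS[b % len(WORDS)] for b in digest[:n_words])

print(trustwords("AB12CD34", "EF56AB78"))   # toy fingerprints
```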
To make encrypted email more accessible, a secure email application named [$p\\equiv p$]{} automates the key management operations; [$p\\equiv p$]{} still requires the users to carry out the verification; however, the authentication process is simple: users have to compare familiar words instead of strings of random characters, and the application then shows the users what level of trust they have achieved via colored visual indicators. Yet, users may not execute the authentication ceremony as intended, [$p\\equiv p$]{}\u2019s trust rating may be wrongly assigned, or both. To learn whether [$p\\equiv p$]{}\u2019s trust ratings (and the corresponding visual indicators) are assigned consistently, we present a formal security analysis of [$p\\equiv p$]{}\u2019s authentication ceremony. From the software implementation in C, we derive the specifications of an abstract protocol for public key distribution, encryption and trust establishment; then, we model the protocol in a variant of the applied pi calculus and later formally verify and validate specific privacy and authentication properties. We also discuss alternative research directions that" +"---\nabstract: 'We formulate explicit predictions concerning the symmetry of optimal codes in compact metric spaces. This motivates the study of optimal codes in various spaces where these predictions can be tested.'\nauthor:\n- 'Christopher\u00a0Cox[^1] Emily\u00a0J.\u00a0King[^2] Dustin\u00a0G.\u00a0Mixon[^3] Hans\u00a0Parshall[^4]'\ntitle: Uniquely optimal codes of low complexity are symmetric\n---\n\nIntroduction\n============\n\nSolutions to geometric extremal problems often exhibit a notably high degree of symmetry. Fejes T\u00f3th observed this phenomenon in\u00a0[@Toth:64; @Toth:86], in which he elaborates on many examples, including the Tammes problem of arranging points on the sphere so that the minimum distance is maximized. For this problem, optimal configurations include the vertices of the tetrahedron, the octahedron, and the icosahedron\u00a0[@Toth:40]. By virtue of their striking symmetry, these Platonic solids were well understood by Euclid long before the Dutch botanist Tammes was inspired by the regular distribution of pores on spherical pollen grains, and yet they independently arise as solutions to a seemingly unrelated geometric extremal problem.\n\nThe recent literature offers numerous incarnations of this mysterious correspondence between optimality and symmetry. Cohn and Kumar\u00a0[@CohnK:07] showed that there are a handful of configurations in $S^{d-1}$ that simultaneously minimize an infinite class of natural" +"---\nabstract: 'The adversarial vulnerability of deep networks has spurred the interest of researchers worldwide. Unsurprisingly, like images, adversarial examples also translate to time-series data as they are an inherent weakness of the model itself rather than the modality. Several attempts have been made to defend against these adversarial attacks, particularly for the visual modality. In this paper, we perform detailed benchmarking of well-proven adversarial defense methodologies on time-series data. We restrict ourselves to the $L_{\\infty}$ threat model. We also explore the trade-off between smoothness and clean accuracy for regularization-based defenses to better understand the trade-offs that they offer. Our analysis shows that the explored adversarial defenses offer robustness against both strong white-box as well as black-box attacks. 
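A minimal PGD sketch under the $L_\infty$ threat model used above, applied to a stand-in 1D-CNN time-series classifier; the model, budget and step sizes are placeholders rather than the benchmarked setups.

```python
import torch
import torch.nn as nn

def pgd_linf(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Projected gradient descent inside the L_inf ball of radius eps
    around the clean series x (shape: batch x channels x length)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()         # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)    # project to the ball
    return x_adv.detach()

model = nn.Sequential(nn.Conv1d(1, 8, 5), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 2))
x, y = torch.randn(4, 1, 128), torch.randint(0, 2, (4,))
x_adv = pgd_linf(model, x, y)
print("max |perturbation|:", (x_adv - x).abs().max().item())   # <= eps
```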
This paves the way for future research in the direction of adversarial attacks and defenses, particularly for time-series data.'\nauthor:\n- Shoaib Ahmed Siddiqui\n- Andreas Dengel\n- Sheraz Ahmed\nbibliography:\n- 'ecml.bib'\ntitle: 'Benchmarking adversarial attacks and defenses for time-series data'\n---\n\nIntroduction\n============\n\nTime-series data is ubiquitous in this era of internet-of-things (IoT) and industry 4.0 where millions of sensors are generating data at an extremely high frequency\u00a0[@siddiqui2019tsviz; @fawaz2019adversarial; @karim2020adversarial; @harford2020adversarial]. With this increasing amount of data, there has" +"---\nauthor:\n- 'Chiel C. van Heerwaarden `(chiel.vanheerwaarden@wur.nl)`'\n- 'Wouter B. Mol'\n- 'Menno A. Veerman'\n- 'Imme B. Benedict'\n- 'Bert G. Heusinkveld'\n- 'Wouter H. Knap'\n- Stelios Kazadzis\n- Natalia Kouremeti\n- Stephanie Fiedler\ndate: 'January 21, 2021'\ntitle:\n- 'Record high solar irradiance in Western Europe during first COVID-19 lockdown largely due to unusual weather'\n- '**Supplementary material:** Record high solar irradiance in Western Europe during first COVID-19 lockdown largely due to unusual weather'\n---\n\nAbstract\n========\n\nSpring 2020 broke sunshine duration records across Western Europe. The Netherlands recorded the highest surface irradiance since 1928, exceeding the previous extreme of 2011 by 13%, and the diffuse fraction of the irradiance measured a record low percentage (38%). The coinciding irradiance extreme and a reduction in anthropogenic pollution due to COVID-19 measures triggered the hypothesis that cleaner-than-usual air contributed to the record. Based on analyses of ground-based and satellite observations and experiments with a radiative transfer model, we estimate a 1.3% (2.3 W m$^{-2}$) increase in surface irradiance with respect to the 2010-2019 mean due to a low median aerosol optical depth, and a 17.6% (30.7 W m$^{-2}$) increase due to several exceptionally dry days and a very" +"---\nabstract: 'Researcher Bias (RB) occurs when researchers influence the results of an empirical study based on their expectations. RB might be due to the use of Questionable Research Practices (QRPs). In research fields like medicine, blinding techniques have been applied to counteract RB. We conducted an explorative qualitative survey to investigate RB in Software Engineering (SE) experiments, with respect to: *(i)*\u00a0QRPs potentially leading to RB, *(ii)*\u00a0causes behind RB, and *(iii)*\u00a0possible actions to counteract RB including blinding techniques. Data collection was based on semi-structured interviews. We interviewed nine active experts in the empirical SE community. We then analyzed the transcripts of these interviews through thematic analysis. We found that some QRPs are acceptable in certain cases. 
Also, it appears that the presence of RB is perceived in SE and, to counteract RB, a number of solutions have been highlighted: some are intended for SE researchers and others for the boards of SE research outlets.'\nauthor:\n- \n- \n- \n- \n- \n- \n- \nbibliography:\n- 'IEEEabrv.bib'\n- 'bibliography.bib'\ntitle: 'Researcher Bias in Software Engineering Experiments: a Qualitative Investigation'\n---\n\nSurvey, interview, researcher bias, blinding.\n\nIntroduction\n============\n\nIn research, *bias* is defined as the combination of various design, data," +"---\nabstract: |\n We present cosmological hydrodynamic simulations of a quasar-mass halo ($M_{\\rm halo} \\approx 10^{12.5}\\,{\\rm M}_{\\odot}~{\\rm at}~z=2$) that for the first time resolve gas transport down to the inner 0.1pc surrounding the central massive black hole. We model a multi-phase interstellar medium including stellar feedback by supernovae, stellar winds, and radiation, and a hyper-Lagrangian refinement technique increasing the resolution dynamically approaching the black hole. We do not include black hole feedback. We show that the sub-pc inflow rate (1) can reach $\\sim$6[M$_{\\odot}$yr$^{-1}$]{}\u00a0roughly in steady state during the epoch of peak nuclear gas density ($z\\sim 2$), sufficient to power a luminous quasar, (2) is highly time variable in the pre-quasar phase, spanning 0.001\u201310[M$_{\\odot}$yr$^{-1}$]{}\u00a0on Myr timescales, and (3) is limited to short ($\\sim$2Myr) active phases (0.01\u20130.1[M$_{\\odot}$yr$^{-1}$]{}) followed by longer periods of inactivity at lower nuclear gas density and late times ($z\\sim1$), owing to the formation of a hot central cavity. Inflowing gas is primarily cool, rotational support dominates over turbulence and thermal pressure, and star formation can consume as much gas as provided by inflows across 1pc\u201310kpc. Gravitational torques from multi-scale stellar non-axisymmetries dominate angular momentum transport over gas self-torquing and pressure gradients, with accretion weakly dependent on black" +"---\nabstract: 'We report numerical calculations of a dynamic pairbreaking current density $J_d$ and a critical superfluid velocity $v_d$ in a nonequilibrium superconductor carrying a uniform, large-amplitude ac current density $J(t)=J_a\\sin\\Omega t$ with $\\Omega$ well below the gap frequency $\\Omega\\ll \\Delta_0/\\hbar$. The dependencies $J_d(\\Omega,T)$ and $v_d(\\Omega,T)$ near the critical temperature $T_c$ were calculated from either the full time-dependent nonequilibrium equations for a dirty s-wave superconductor and the time-dependent Ginzburg-Landau (TDGL) equations for a gapped superconductor, taking into account the GL relaxation time of the order parameter $\\tau_{GL}$ and the inelastic electron-phonon relaxation time of quasiparticles $\\tau_E$. We show that both approaches give similar frequency dependencies of $J_d(\\Omega)$ and $v_d(\\Omega)$ which gradually increase from their static pairbreaking GL values $J_c$ and $v_c$ at $\\Omega\\tau_E\\ll 1$ to $\\sqrt{2}J_c$ and $\\sqrt{2}v_c$ at $\\Omega\\tau_E\\gg 1$. Here $J_d$, $v_d$ and a dynamic superheating field at which the Meissner state becomes unstable were calculated in two different regimes of a fixed ac current and a fixed ac superfluid velocity induced by the applied ac magnetic field $H=H_a\\sin\\Omega t$ in a thin superconducting filament or a type-II superconductor with a large GL parameter. 
We also calculated a nonlinear electromagnetic response of a nonequilibrium superconducting state, particularly a" +"---\nabstract: 'This paper introduces a novel neural network-based speech coding system that can process noisy speech effectively. The proposed source-aware neural audio coding (SANAC) system harmonizes a deep autoencoder-based source separation model and a neural coding system, so that it can explicitly perform source separation and coding in the latent space. An added benefit of this system is that the codec can allocate a different amount of bits to the underlying sources, so that the more important source sounds better in the decoded signal. We target a new use case where the user on the receiver side cares about the quality of the non-speech components in the speech communication, while the speech source still carries the most important information. Both objective and subjective evaluation tests show that SANAC can recover the original noisy speech better than the baseline neural audio coding system, which is with no source-aware coding mechanism, and two conventional codecs.'\naddress: |\n $^1$Indiana University, Department of Intelligent Systems Engineering, Bloomington, IN, USA\\\n $^2$Electronics and Telecommunications Research Institute, Daejeon, South Korea\nbibliography:\n- 'new.bib'\ntitle: 'Source-Aware Neural Speech Coding for Noisy Speech Compression'\n---\n\nSpeech enhancement, speech coding, source separation\n\nIntroduction\n============\n\nBreakthroughs made in deep learning" +"---\nabstract: 'The ability to robustly and efficiently control the dynamics of nonlinear systems lies at the heart of many current technological challenges, ranging from drug delivery systems to ensuring flight safety. Most such scenarios are too complex to tackle directly and reduced-order modelling is used in order to create viable representations of the target systems. The simplified setting allows for the development of rigorous control theoretical approaches, but the propagation of their effects back up the hierarchy and into real-world systems remains a significant challenge. Using the canonical setup of a liquid film falling down an inclined plane under the action of active feedback controls in the form of blowing and suction, we develop a multi-level modelling framework containing both analytical models and direct numerical simulations acting as an in silico experimental platform. Constructing strategies at the inexpensive lower levels in the hierarchy, we find that offline control transfer is not viable, however analytically-informed feedback strategies show excellent potential, even far beyond the anticipated range of applicability of the models. The detailed effects of the controls in terms of stability and treatment of nonlinearity are examined in detail in order to gain understanding of the information transfer inside the" +"---\nabstract: 'The detection of manufacturing errors is crucial in fabrication processes to ensure product quality and safety standards. Since many defects occur very rarely and their characteristics are mostly unknown a priori, their detection is still an open research question. To this end, we propose [DifferNet]{.nodecor}: It leverages the descriptiveness of features extracted by convolutional neural networks to estimate their density using normalizing flows. Normalizing flows are well-suited to deal with low dimensional data distributions. However, they struggle with the high dimensionality of images. 
Therefore, we employ a multi-scale feature extractor which enables the normalizing flow to assign meaningful likelihoods to the images. Based on these likelihoods we develop a scoring function that indicates defects. Moreover, propagating the score back to the image enables pixel-wise localization. To achieve high robustness and performance, we exploit multiple transformations in training and evaluation. In contrast to most other methods, ours does not require a large number of training samples and performs well with as few as 16 images. We demonstrate superior performance over existing approaches on the challenging and newly proposed MVTec AD [@mvtec] and Magnetic Tile Defects [@magnets] datasets.'\nauthor:\n- |\n Marco Rudolph Bastian Wandt Bodo Rosenhahn\\\n Leibniz" +"---\nabstract: |\n We present new methods for solving the Satisfiability Modulo Theories problem over the theory of Quantifier-Free Non-linear Integer Arithmetic, SMT(QF-NIA), which consists in deciding the satisfiability of ground formulas with integer polynomial constraints. Following previous work, we propose to solve SMT(QF-NIA) instances by reducing them to linear arithmetic: non-linear monomials are linearized by abstracting them with fresh variables and by performing case splitting on integer variables with finite domain. For variables that do not have a finite domain, we can artificially introduce one by imposing a lower and an upper bound, and iteratively enlarge it until a solution is found (or the procedure times out).\n\n The key to the success of the approach is to determine, at each iteration, which domains have to be enlarged. Previously, unsatisfiable cores were used to identify the domains to be changed, but no clue was obtained as to how large the new domains should be. Here we explain two novel ways to guide this process by analyzing solutions to optimization problems: (i) to minimize the number of violated artificial domain bounds, solved via a Max-SMT solver, and (ii) to minimize the distance with respect to the artificial domains, solved via an" +"---\nabstract: 'We establish sharp asymptotically optimal strategies for the problem of online prediction with *history dependent experts*. The prediction problem is played (in part) over a discrete graph called the $d$ dimensional *de Bruijn graph*, where $d$ is the number of days of history used by the experts. Previous work [@drenska2019PDE] established $O({\\varepsilon})$ optimal strategies for $n=2$ experts and $d\\leq 4$ days of history, while [@drenska2020Online] established $O({\\varepsilon}^{1/3})$ optimal strategies for all $n\\geq 2$ and all $d\\geq 1$, where the game is played for $N$ steps and ${\\varepsilon}=N^{-1/2}$. In this paper, we show that the optimality conditions over the de Bruijn graph correspond to a graph Poisson equation, and we establish $O({\\varepsilon})$ optimal strategies for all values of $n$ and $d$.'\nauthor:\n- 'Jeff Calder[^1]'\n- Nadejda Drenska\nbibliography:\n- 'ref.bib'\ntitle: 'Asymptotically optimal strategies for online prediction with history-dependent experts'\n---\n\nIntroduction\n============\n\n*Prediction with expert advice* refers to problems in online machine learning [@CBL] where a player synthesizes advice from many experts to make predictions in real-time, often against an adversarial environment. The seminal work in the field is due to Cover [@cover1966behavior] and Hannan [@Hannan], and since then, the field has grown substantially. 
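The d-dimensional de Bruijn graph on which this prediction problem is played is easy to build explicitly: nodes are the last d outcomes, and each new outcome shifts the history window. A small sketch for a binary outcome alphabet:

```python
def de_bruijn_graph(d, alphabet=(0, 1)):
    """Nodes: length-d histories of past outcomes; the edge taken from
    history h on outcome a leads to h[1:] + (a,)."""
    nodes = [()]
    for _ in range(d):
        nodes = [h + (a,) for h in nodes for a in alphabet]
    return {h: {a: h[1:] + (a,) for a in alphabet} for h in nodes}

g = de_bruijn_graph(2)
print(len(g), g[(0, 1)])   # 4 nodes; transitions out of history (0, 1)
```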
We refer to" +"---\nabstract: 'We present W-Net, a novel Convolution Neural Network (CNN) framework that employs raw ultrasound waveforms from each A-scan, typically referred to as ultrasound Radio Frequency (RF) data, in addition to the gray ultrasound image to semantically segment and label tissues. Unlike prior work, we seek to label every pixel in the image, without the use of a background class. To the best of our knowledge, this is also the first deep-learning or CNN approach for segmentation that analyses ultrasound raw RF data along with the gray image. International patent(s) pending \\[PCT/US20/37519\\]. We chose subcutaneous tissue (SubQ) segmentation as our initial clinical goal since it has diverse intermixed tissues, is challenging to segment, and is an underrepresented research area. SubQ potential applications include plastic surgery, adipose stem-cell harvesting, lymphatic monitoring, and possibly detection/treatment of certain types of tumors. A custom dataset consisting of hand-labeled images by an expert clinician and trainees are used for the experimentation, currently labeled into the following categories: skin, fat, fat fascia/stroma, muscle and muscle fascia. We compared our results with U-Net and Attention U-Net. Our novel *W-Net*\u2019s RF-Waveform input and architecture increased mIoU accuracy (averaged across all tissue classes) by 4.5% and 4.9% compared" +"---\nabstract: 'In LHC searches for new and rare phenomena the top-associated channel $pp \\to t\\overline{t}W^\\pm +X$ is a challenging background that multilepton analyses must overcome. Motivated by sustained measurements of enhanced rates of same-sign and multi-lepton final states, we reexamine the importance of higher jet multiplicities in $pp \\to t\\overline{t}W^\\pm +X$ that enter at $\\mathcal{O}(\\alpha_s^3\\alpha)$ and $\\mathcal{O}(\\alpha_s^4\\alpha)$, i.e., that contribute at NLO and NNLO in QCD in inclusive $t\\overline{t}W^\\pm$ production. Using fixed-order computations, we estimate that a mixture of real and virtual corrections at $\\mathcal{O}(\\alpha_s^4\\alpha)$ in well-defined regions of phase space can arguably increase the total $t\\overline{t}W^\\pm$ rate at NLO by at least $10\\%-14\\%$. However, by using non-unitary NLO multi-jet matching, we estimate that these same corrections are at most $10\\%-12\\%$, and at the same time exhibit the enhanced jet multiplicities that are slightly favored by data. This seeming incongruity suggests a need for the full NNLO result. We comment on implications for the $t\\overline{t}Z$ process.'\naddress:\n- 'School of Physics and Institute for Collider Particle Physics, University of the Witwatersrand, Wits, Johannesburg 2050, South Africa'\n- 'Centre for Cosmology, Particle Physics and Phenomenology [(CP3)]{}, Universit\u00e9 catholique de Louvain, Chemin du Cyclotron, Louvain-la-Neuve, B-1348, Belgium'\n- 'iThemba LABS, National" +"---\nabstract: |\n We consider fundamental algorithmic number theoretic problems and their relation to a class of block structured Integer Linear Programs (ILPs) called $2$-stage stochastic. 
A $2$-stage stochastic ILP is an integer program of the form $\\min \\{c^T x \\mid \\mathcal{A} x = b, \\ell \\leq x \\leq u, x \\in \\mathbb{Z}^{r + ns} \\}$ where the constraint matrix $\\mathcal{A} \\in \\mathbb{Z}^{nt \\times (r+ns)}$ consists of $n$ matrices $A_i \\in \\mathbb{Z}^{t \\times r}$ stacked vertically, with $n$ matrices $B_i \\in \\mathbb{Z}^{t \\times s}$ arranged block-diagonally beside them.\n\n First, we show a stronger hardness result for a number theoretic problem called [Quadratic Congruences]{} where the objective is to compute a number $z \\leq \\gamma$ satisfying $z^2 \\equiv \\alpha \\bmod \\beta$ for given $\\alpha, \\beta, \\gamma \\in \\mathbb{Z}$. This problem was proven to be NP-hard already in 1978 by Manders and Adleman. However, this hardness only applies to instances where the prime factorization of $\\beta$ admits large multiplicities of each prime number. We circumvent this necessity by proving that the problem remains NP-hard even if each prime number occurs only constantly often.\n\n Then, using this new hardness result for the ${\\textsc{Quadratic Congruences}}$ problem, we prove a lower" +"---\nabstract: 'We compare the performance of a quantum radar based on two-mode squeezed states with a classical radar system based on correlated thermal noise. With a constraint of equal number of photons $N_S$ transmitted to probe the environment, we find that the quantum setup exhibits an advantage of $\\sqrt{2}$ in the cross-mode correlations with respect to its classical counterpart. Amplification of the signal and the idler is considered at different stages of the protocol, showing that no quantum advantage is achievable when a large-enough gain is applied, even when quantum-limited amplifiers are available. We also characterize the minimal type-II error probability decay, given a constraint on the type-I error probability, and find that the optimal decay rate of the type-II error probability in the quantum setup is $\\ln(1+1/N_S)$ larger than that of the optimal classical setup in the $N_S\\ll1$ regime. In addition, we consider the Receiver Operating Characteristic (ROC) curves for the scenario when the idler and the received signal are measured separately, showing that no quantum advantage is present in this case. Our work characterizes the trade-off between quantum correlations and noise in quantum radar systems.'\nauthor:\n- '\\'\n- '\\'\n- \nbibliography:\n- 'bibi.bib'\ntitle: A comparison between quantum" +"---\nabstract: 'The capacity of finite state channels (FSCs) has been established as the limit of a sequence of multi-letter expressions only and, despite tremendous effort, a corresponding finite-letter characterization remains unknown to date. This paper analyzes the capacity of FSCs from a fundamental, algorithmic point of view by studying whether or not the corresponding achievability and converse bounds on the capacity can be computed algorithmically. For this purpose, the concept of Turing machines is used, as they provide the fundamental performance limits of digital computers. To this end, computable continuous functions are studied and properties of computable sequences of such functions are identified. It is shown that the capacity of FSCs is not Banach-Mazur computable, which is the weakest form of computability. This implies that there is no algorithm (or Turing machine) that can compute the capacity of a given FSC. As a consequence, it is then shown that either the achievability or the converse must yield a bound that is not Banach-Mazur computable.
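The [Quadratic Congruences]{} problem from the $2$-stage stochastic ILP entry above is compact enough to state in code. The following brute-force sketch only illustrates the problem definition; its running time is exponential in the bit length of $\gamma$, which is exactly why the NP-hardness result is meaningful.

```python
# Brute-force illustration of the Quadratic Congruences problem: given
# alpha, beta, gamma, find z <= gamma with z^2 = alpha (mod beta).
# This conveys the problem statement only, not an efficient algorithm.
def quadratic_congruence(alpha: int, beta: int, gamma: int):
    for z in range(gamma + 1):
        if (z * z) % beta == alpha % beta:
            return z
    return None  # no solution with z <= gamma

# Example: z = 4 works, since 4^2 = 16 = 5 (mod 11) and 4 <= 10.
print(quadratic_congruence(5, 11, 10))  # -> 4
```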
This also means that there exist FSCs for which computable lower and upper bounds can never be tight. Furthermore, it is shown that the capacity of FSCs is not approximable, which is an even" +"---\nabstract: 'Recommending Points-of-Interest (POIs) is surfacing in many location-based applications. The literature contains personalized and socialized POI recommendation approaches which employ historical check-ins and social links to make recommendations. However, these systems still lack customizability (incorporating session-based user interactions with the system) and contextuality (incorporating the situational context of the user), particularly in cold start situations, where nearly no user information is available. In this paper, we propose a POI recommendation system which tackles the challenges of cold start, customizability, contextuality, and explainability by exploiting look-alike groups mined in public POI datasets. Our system reformulates the problem of POI recommendation as recommending explainable look-alike groups (and their POIs) which are in line with the user\u2019s interests. It frames the task of POI recommendation as an exploratory process where users interact with the system by expressing their favorite POIs, and their interactions impact the way look-alike groups are selected. Moreover, it employs \u201cmindsets\u201d, which capture the actual situation and intent of the user, and enforce the semantics of POI interestingness. In an extensive set of experiments, we show the quality of our approach in recommending relevant look-alike groups and their POIs, in terms of efficiency and effectiveness.'\nauthor:\n- 'Behrooz Omidvar-Tehrani, Sruthi Viswanathan, Jean-Michel" +"---\nabstract: 'The $f(R,T)$ gravity is a theory whose gravitational action depends arbitrarily on the Ricci scalar, $R$, and the trace of the stress-energy tensor, $T$; its field equations also depend on the matter Lagrangian, $\\mathcal{L}_{m}$. In modified theories of gravity where the field equations depend on the Lagrangian, there is no unique definition of the Lagrangian, and the dynamics of the gravitational and matter fields can be different depending on the choice performed. In this work, we have eliminated the $\\mathcal{L}_{m}$ dependence from the $f(R,T)$ gravity field equations by generalizing the approach of Moraes in Ref.\u00a0[@Moraes2019]. We also propose a general approach where we argue that the trace of the energy-momentum tensor must be considered an \u201cunknown\u201d variable of the field equations. The trace can only depend on fundamental constants and a few inputs from the standard model. Our proposal resolves two limitations: first, that the energy-momentum tensor of $f(R,T)$ gravity is not the perfect-fluid one; second, that the Lagrangian is not well-defined. As a test of our approach, we applied it to the study of the matter era in cosmology, and the theory can successfully describe a transition from a decelerated Universe to an accelerated one without the need for dark" +"---\nauthor:\n- Yashank Singh and Niladri Chatterjee\nbibliography:\n- 'ri.bib'\ntitle: Probabilistic Random Indexing for Continuous Event Detection\n---\n\nIntroduction {#sec:1}\n============\n\nEvent detection has been a common application of Machine Learning and NLP. The classic techniques in this regard typically involve the word co-occurrence matrix and its Singular Value Decomposition (SVD) [@SVD] to track meaning and relationships between different words in order to detect an event.
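As a concrete reference point for the classic pipeline just described in the Random Indexing entry, the sketch below builds a word co-occurrence matrix from a toy two-sentence corpus and factors it with SVD to obtain low-dimensional word vectors. The corpus, the sentence-wide context window, and the rank are all toy assumptions.

```python
# Toy illustration of the classic co-occurrence + SVD pipeline mentioned above.
import numpy as np

corpus = [["markets", "fell", "today"], ["markets", "rose", "today"]]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

C = np.zeros((len(vocab), len(vocab)))
for sent in corpus:                       # context window = whole sentence
    for w in sent:
        for c in sent:
            if w != c:
                C[idx[w], idx[c]] += 1    # symmetric co-occurrence counts

U, S, Vt = np.linalg.svd(C)               # truncated U*S gives dense vectors
embeddings = U[:, :2] * S[:2]             # rank-2 word representations
print(dict(zip(vocab, np.round(embeddings, 2))))
```

This is the approach whose cost and dimensionality problems motivate Random Indexing: the matrix $C$ grows quadratically with the vocabulary and the SVD must be recomputed as new documents arrive.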
However, in modern times these methods are too slow to cope with the huge amount of new information that is created every day on internet platforms, Twitter being one of the most popular in this regard. Applications of these methods for detection of dynamic events suffer from three major problems, namely the arbitrariness of the underlying language (English, here), the exponentially increasing volume of data, and the curse of dimensionality.\n\nComputer processing of natural languages is generally perceived as a huge challenge because\n\n- words are often ambiguous, i.e. one word can have several meanings (polysemy)\n\n- several words may refer to the same concept (synonymy).\n\nOn the other hand, in the context of online learning, documents may arrive continuously. Many of them may contain unseen words, thereby increasing" +"---\nabstract: 'We study a uniformly accelerated detector coupled to a massless scalar field for a finite time interval. By considering the detector initially prepared in a superposition (qubit) state, we find that the acceleration induces decoherence on the qubit. Our results suggest the dependence of the loss of coherence on the polar angle of the qubit state on the Bloch sphere and on the interaction time. Adjusting those parameters can significantly improve the conditions for estimating the degree of decoherence induced by Unruh radiation.'\nauthor:\n- 'Helder A. S. Costa'\ntitle: '**Decoherence of a uniformly accelerated finite-time detector**'\n---\n\n Introduction\n=============\n\nAccording to Unruh and Wald [@Unruh], a uniformly accelerated detector (i.e., a two-level atom) coupled to a massless scalar field in the Minkowski vacuum perceives a thermal distribution of Rindler particles with a temperature proportional to its proper acceleration. This effect is usually named the Fulling-Davies-Unruh effect, or simply the Unruh effect [@Unruh2]. In the original proposal, the Unruh effect has been analyzed in an in-out approach, i.e., the initial state of the system - detector plus quantum scalar field - is assumed to be prepared at past infinity and the out state is evaluated at future infinity. On the other hand," +"---\nabstract: '[We present orbits and their properties for 152 globular clusters of the Milky Way galaxy obtained using average Gaia DR2 proper motions and other astrometric data from the list of Vasiliev (2019). For orbit integration we have used the axisymmetric model of the Galactic potential based on the Navarro-Frenk-White dark halo, modified by Bajkova, Bobylev (2016, 2017) using circular velocities of Galactic objects in a wide region of Galactocentric distances (up to 200 kpc) from the Bhattacharjee et al. (2014) catalog. Based on the analysis of the obtained orbits, we have modified the composition of the subsystems of globular clusters presented in Massari et al. (2019).]{}'\n---\n\n0.5cm\n\n**Orbits of 152 Globular Clusters of the Milky Way Galaxy**\n\n**Constructed from the Gaia DR2 data**\n\nA.\u00a0T.\u00a0Bajkova, V.\u00a0V.\u00a0Bobylev\n\n*Pulkovo Astronomical Observatory, St.-Petersburg, Russia, E-mail: bajkova@gaoran.ru*\n\nKey words: (Galaxy:) globular clusters: general\n\nIntroduction\n============\n\nThe appearance of accurate astrometric data from the Gaia satellite measurements of the positions and spatial velocities of globular clusters (Helmi et al. 2018; Baumgardt et al. 2019; Vasiliev 2019) makes it possible to study their dynamics, origin and evolution (Myeong et al. 2019; Massari et al. 2019; Bajkova et al.
2020).\n\nIn" +"---\nauthor:\n- 'Eric Chesebro, Cory Emlen, Kenton Ke, Denise LaFontaine, Kelly McKinnie, Catherine Rigby'\nbibliography:\n- 'FRF.bib'\ntitle: Farey Recursive Functions\n---\n\nIntroduction\n============\n\nA [*second order linear recurrence relation*]{} is an expression of the form $$y_{n+1} = ay_{n-1}+by_{n}$$ where the $y_j$\u2019s are indeterminates and $a$ and $b$ are numbers. If we take $a=b=1$, $y_0=0$, and $y_1=1$, then the sequence of numbers $\\{ y_j \\}_0^\\infty$ which satisfies this relation is the well-known sequence of [*Fibonacci numbers*]{} $$0, 1, 1, 2, 3, 5, 8, 13, \\ldots$$\n\nSecond order linear recurrence relations are prominent throughout mathematics and appear in surprising and diverse problems. There are also many generalizations. One possibility is to allow $a, b$, and the $y_j$\u2019s to be polynomials. For instance, the [*Fibonacci polynomials*]{} are defined by setting $y_0=0$, $y_1 = 1$, as with the first two Fibonacci numbers, and insisting that the remainder satisfy the recurrence relation $$y_{n+1} = y_{n-1}+xy_n.$$ The first few Fibonacci polynomials are $$0,1,x,x^2+1,x^3+2x, x^4+3x^2+1,\\ldots$$ Evidently, when the Fibonacci polynomials are evaluated at $x=1$, the result is the Fibonacci numbers. The Fibonacci polynomials share many interesting identities with the Fibonacci numbers (see e.g., [@benjamin Ch.9]) and just as the Fibonacci numbers solve many counting" +"---\nabstract: 'We study the impact of large lepton flavour asymmetries on the cosmic QCD transition. Scenarios of *unequal* lepton flavour asymmetries are observationally almost unconstrained and therefore open up a whole new parameter space for the cosmic QCD transition. We find that for large asymmetries the formation of a Bose-Einstein condensate of pions can occur and identify the corresponding parameter space. In the vicinity of the QCD transition scale, we express the pressure in terms of a Taylor expansion with respect to the complete set of chemical potentials. The Taylor coefficients rely on input from lattice QCD calculations from the literature. The domain of applicability of this method is discussed.'\nauthor:\n- 'Mandy M. Middeldorf-Wygas'\n- 'Isabel M. Oldengott'\n- Dietrich B\u00f6deker\n- 'Dominik J. Schwarz'\nbibliography:\n- 'Literature.bib'\ntitle: The cosmic QCD transition for large lepton flavour asymmetries\n---\n\nIntroduction {#sec:Intro}\n============\n\nThe recent direct detection of gravitational waves (GWs) by the LIGO/Virgo collaboration [@Abbott:2016blz] has revived interest in phase transitions in the early Universe [@Caprini:2018mtu]. In general, first-order phase transitions can be accompanied by processes that lead to the emission of GWs, while crossovers do not lead to a strong enhancement over the primordial GW spectrum." +"---\nabstract: 'This paper studies the mathematical properties of collectively canalizing Boolean functions, a class of functions that has arisen from applications in systems biology. Boolean networks are an increasingly popular modeling framework for regulatory networks, and the class of functions studied here captures a key feature of biological network dynamics, namely that a subset of one or more variables, under certain conditions, can dominate the value of a Boolean function, to the exclusion of all others. These functions have rich mathematical properties to be explored.
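The Fibonacci-polynomial recurrence in the Farey Recursive Functions entry above is easy to verify mechanically. The following sketch computes the polynomials directly from the recurrence $y_{n+1} = y_{n-1} + x y_n$ and checks that evaluation at $x=1$ recovers the Fibonacci numbers; SymPy is used purely for convenience.

```python
# Compute Fibonacci polynomials from the recurrence y_{n+1} = y_{n-1} + x*y_n
# with y_0 = 0, y_1 = 1, and check the evaluation at x = 1.
import sympy as sp

x = sp.symbols('x')

def fibonacci_polynomials(n):
    polys = [sp.Integer(0), sp.Integer(1)]   # y_0 = 0, y_1 = 1
    while len(polys) <= n:
        polys.append(sp.expand(polys[-2] + x * polys[-1]))
    return polys[: n + 1]

polys = fibonacci_polynomials(6)
print(polys)                          # [0, 1, x, x**2 + 1, x**3 + 2*x, ...]
print([p.subs(x, 1) for p in polys])  # [0, 1, 1, 2, 3, 5, 8]
```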
The paper shows how the number and type of such sets influence a function\u2019s behavior, and defines a new measure for the canalizing strength of any Boolean function. We further connect the concept of collective canalization with the well-studied concept of the average sensitivity of a Boolean function. The relationship between Boolean functions and the dynamics of the networks they form is important in a wide range of applications beyond biology, such as computer science, and has been studied with statistical and simulation-based methods. But the rich relationship between structure and dynamics remains largely unexplored, and this paper is intended as a contribution to its mathematical foundation.'\nauthor:\n- Claus Kadelka\n- Benjamin Keilty" +"---\nabstract: 'The Uehling contribution to the Lamb shift can be computed exactly in terms of the Uehling potential function. However, derivations of this function are complex, either avoiding divergences using intricate techniques from early quantum field theory (QFT) or else using more modern approaches based on charge and mass renormalization. In the present paper we derive the Uehling potential function in a fairly straightforward conceptual way, not involving renormalization, in which the vacuum polarization tensor is viewed as a Lorentz invariant 2-tensor valued measure on Minkowski space. Furthermore, we compute a complex matrix valued potential function for the electron self-energy contribution to the Lamb shift. The resulting potential function is derived in a conceptually simple way not involving renormalization and can be used for higher order computations in QFT involving multiple loops.'\nauthor:\n- |\n John Mashford\\\n School of Mathematics and Statistics\\\n University of Melbourne, Victoria 3010, Australia\\\n E-mail: mashford@unimelb.edu.au\ntitle: '**Computation of the leading order contributions to the Lamb shift for the H atom using spectral regularization**'\n---\n\nIntroduction\n============\n\nThe Lamb shift is a phenomenon which is closely tied up with the instigation and development of quantum field theory (QFT). At the time of its discovery in 1947 by" +"---\nabstract: 'We investigate the statistics of encounters of a diffusing particle with different subsets of the boundary of a confining domain. The encounters with each subset are characterized by the boundary local time on that subset. We extend a recently proposed approach to express the joint probability density of the particle position and of its multiple boundary local times via a multi-dimensional Laplace transform of the conventional propagator satisfying the diffusion equation with mixed Robin boundary conditions. In the particular cases of an interval, a circular annulus and a spherical shell, this representation can be explicitly inverted to access the statistics of two boundary local times. We provide the exact solutions and their probabilistic interpretation for the case of an interval and sketch their derivation for two other cases.
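Since the Boolean-functions entry above connects collective canalization to average sensitivity, a small exhaustive computation may help fix the definition. The sketch below computes the exact average sensitivity of a toy canalizing function by flipping each input on every point of $\{0,1\}^n$; the example function and its interpretation are illustrative assumptions.

```python
# Exact average sensitivity of a Boolean function by exhaustive enumeration
# (fine for small n). The example is canalizing: x1 = 0 forces the output to 0.
from itertools import product

def average_sensitivity(f, n):
    total = 0
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1                      # flip the i-th input
            total += f(x) != f(tuple(y))   # count sensitive coordinates
    return total / 2 ** n                  # average over all inputs

f = lambda x: x[0] and (x[1] or x[2])      # canalized by x1 = 0
print(average_sensitivity(f, 3))           # -> 1.25
```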
We also obtain the distributions of various associated first-passage times and discuss their applications.'\nauthor:\n- 'Denis\u00a0S.\u00a0Grebenkov'\ntitle: |\n Joint distribution of multiple boundary local times\\\n and related first-passage time problems with multiple targets\n---\n\nIntroduction\n============\n\nDiffusion-controlled reactions and related stochastic processes in a Euclidean domain $\\Omega\\subset \\R^d$ are typically described by the propagator (also known as the heat kernel or the Green\u2019s function), $G_q(\\x,t|\\x_0)$, that is" +"---\nabstract: 'Using a recently developed quantum embedding theory, we present first-principles calculations of strongly correlated states of spin defects in diamond. Within this theory, effective Hamiltonians are constructed, which can be solved by classical and quantum computers; the latter promise a much more favorable scaling as a function of system size than the former. In particular, we report a study of the neutral group-IV vacancy complexes in diamond, and we discuss their strongly-correlated spin-singlet and spin-triplet excited states. Our results provide valuable predictions for experiments aimed at optical manipulation of these defects for quantum information technology applications.'\nauthor:\n- He Ma\n- Nan Sheng\n- Marco Govoni\n- Giulia Galli\nbibliography:\n- 'ref.bib'\ntitle: 'First-principles Studies of Strongly Correlated States in Defect Spin Qubits in Diamond'\n---\n\nIntroduction\n============\n\nElectron spins in molecular and condensed systems are important resources for the storage and processing of quantum information [@Weber2010]. In the past decades, several spin-defects in wide band gap semiconductors and insulators have been widely studied, in particular in diamond [@Doherty2013], silicon carbide [@Weber2011; @Christle2015], and aluminum nitride [@Seo2016; @Seo2017]. The prototype example of spin-defects is the negatively-charged nitrogen-vacancy (NV) center in diamond [@Davies1976; @Rogers2008; @Doherty2011; @Maze2011; @Choi2012;